00:00:00.001 Started by upstream project "autotest-per-patch" build number 122817 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.080 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.110 Fetching changes from the remote Git repository 00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.155 Using shallow fetch with depth 1 00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.155 > git --version # timeout=10 00:00:00.178 > git --version # 'git version 2.39.2' 00:00:00.178 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.223 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.234 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.245 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:06.245 > git config core.sparsecheckout # timeout=10 00:00:06.255 > git read-tree -mu HEAD # timeout=10 00:00:06.269 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:06.284 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:06.284 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:06.358 [Pipeline] Start of Pipeline 00:00:06.372 [Pipeline] library 00:00:06.374 Loading library shm_lib@master 00:00:06.374 Library shm_lib@master is cached. Copying from home. 00:00:06.391 [Pipeline] node 00:00:06.397 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.402 [Pipeline] { 00:00:06.415 [Pipeline] catchError 00:00:06.417 [Pipeline] { 00:00:06.432 [Pipeline] wrap 00:00:06.445 [Pipeline] { 00:00:06.455 [Pipeline] stage 00:00:06.457 [Pipeline] { (Prologue) 00:00:06.633 [Pipeline] sh 00:00:06.911 + logger -p user.info -t JENKINS-CI 00:00:06.928 [Pipeline] echo 00:00:06.930 Node: GP11 00:00:06.937 [Pipeline] sh 00:00:07.245 [Pipeline] setCustomBuildProperty 00:00:07.256 [Pipeline] echo 00:00:07.257 Cleanup processes 00:00:07.263 [Pipeline] sh 00:00:07.543 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.543 657120 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.558 [Pipeline] sh 00:00:07.837 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.837 ++ grep -v 'sudo pgrep' 00:00:07.837 ++ awk '{print $1}' 00:00:07.837 + sudo kill -9 00:00:07.837 + true 00:00:07.850 [Pipeline] cleanWs 00:00:07.858 [WS-CLEANUP] Deleting project workspace... 00:00:07.858 [WS-CLEANUP] Deferred wipeout is used... 
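The process-cleanup step logged above follows a common idiom: list any SPDK processes left over from a previous run, drop the pgrep invocation itself from the listing, and force-kill whatever remains, tolerating an empty match so the stage does not fail when the node is already clean. A minimal sketch of that idiom in shell (the workspace path is the one used by this job; xargs -r is an illustrative substitute for the command substitution seen in the log):

  # Kill leftover SPDK processes from a previous build; '|| true' keeps the step green when nothing matches.
  sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
    | grep -v 'sudo pgrep' \
    | awk '{print $1}' \
    | xargs -r sudo kill -9 || true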
00:00:07.864 [WS-CLEANUP] done 00:00:07.867 [Pipeline] setCustomBuildProperty 00:00:07.878 [Pipeline] sh 00:00:08.156 + sudo git config --global --replace-all safe.directory '*' 00:00:08.218 [Pipeline] nodesByLabel 00:00:08.219 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.230 [Pipeline] httpRequest 00:00:08.235 HttpMethod: GET 00:00:08.235 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.238 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.253 Response Code: HTTP/1.1 200 OK 00:00:08.253 Success: Status code 200 is in the accepted range: 200,404 00:00:08.254 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:13.796 [Pipeline] sh 00:00:14.080 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:14.100 [Pipeline] httpRequest 00:00:14.105 HttpMethod: GET 00:00:14.106 URL: http://10.211.164.101/packages/spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:00:14.106 Sending request to url: http://10.211.164.101/packages/spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:00:14.125 Response Code: HTTP/1.1 200 OK 00:00:14.125 Success: Status code 200 is in the accepted range: 200,404 00:00:14.126 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:01:07.254 [Pipeline] sh 00:01:07.534 + tar --no-same-owner -xf spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:01:10.828 [Pipeline] sh 00:01:11.108 + git -C spdk log --oneline -n5 00:01:11.108 68960dff2 lib/event: Bug fix for framework_set_scheduler 00:01:11.108 4506c0c36 test/common: Enable inherit_errexit 00:01:11.108 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:01:11.108 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:11.108 1dc065205 test/scheduler: Calculate median of the cpu load samples 00:01:11.119 [Pipeline] } 00:01:11.137 [Pipeline] // stage 00:01:11.146 [Pipeline] stage 00:01:11.148 [Pipeline] { (Prepare) 00:01:11.168 [Pipeline] writeFile 00:01:11.194 [Pipeline] sh 00:01:11.473 + logger -p user.info -t JENKINS-CI 00:01:11.492 [Pipeline] sh 00:01:11.782 + logger -p user.info -t JENKINS-CI 00:01:11.811 [Pipeline] sh 00:01:12.092 + cat autorun-spdk.conf 00:01:12.092 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.092 SPDK_TEST_NVMF=1 00:01:12.092 SPDK_TEST_NVME_CLI=1 00:01:12.092 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.092 SPDK_TEST_NVMF_NICS=e810 00:01:12.092 SPDK_TEST_VFIOUSER=1 00:01:12.092 SPDK_RUN_UBSAN=1 00:01:12.092 NET_TYPE=phy 00:01:12.099 RUN_NIGHTLY=0 00:01:12.104 [Pipeline] readFile 00:01:12.130 [Pipeline] withEnv 00:01:12.132 [Pipeline] { 00:01:12.148 [Pipeline] sh 00:01:12.429 + set -ex 00:01:12.429 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.429 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.429 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.429 ++ SPDK_TEST_NVMF=1 00:01:12.429 ++ SPDK_TEST_NVME_CLI=1 00:01:12.429 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.429 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.429 ++ SPDK_TEST_VFIOUSER=1 00:01:12.429 ++ SPDK_RUN_UBSAN=1 00:01:12.429 ++ NET_TYPE=phy 00:01:12.429 ++ RUN_NIGHTLY=0 00:01:12.429 + case $SPDK_TEST_NVMF_NICS in 00:01:12.429 + DRIVERS=ice 00:01:12.429 + [[ tcp == \r\d\m\a ]] 00:01:12.429 + [[ -n ice ]] 00:01:12.429 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:01:12.429 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:16.616 rmmod: ERROR: Module irdma is not currently loaded 00:01:16.616 rmmod: ERROR: Module i40iw is not currently loaded 00:01:16.616 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:16.616 + true 00:01:16.616 + for D in $DRIVERS 00:01:16.616 + sudo modprobe ice 00:01:16.616 + exit 0 00:01:16.626 [Pipeline] } 00:01:16.647 [Pipeline] // withEnv 00:01:16.653 [Pipeline] } 00:01:16.671 [Pipeline] // stage 00:01:16.679 [Pipeline] catchError 00:01:16.680 [Pipeline] { 00:01:16.696 [Pipeline] timeout 00:01:16.696 Timeout set to expire in 40 min 00:01:16.698 [Pipeline] { 00:01:16.716 [Pipeline] stage 00:01:16.718 [Pipeline] { (Tests) 00:01:16.735 [Pipeline] sh 00:01:17.018 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.018 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.018 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.018 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:17.018 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.018 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.018 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:17.018 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.018 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.018 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.018 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.018 + source /etc/os-release 00:01:17.018 ++ NAME='Fedora Linux' 00:01:17.018 ++ VERSION='38 (Cloud Edition)' 00:01:17.018 ++ ID=fedora 00:01:17.018 ++ VERSION_ID=38 00:01:17.018 ++ VERSION_CODENAME= 00:01:17.018 ++ PLATFORM_ID=platform:f38 00:01:17.018 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:17.018 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.018 ++ LOGO=fedora-logo-icon 00:01:17.018 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:17.018 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.018 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:17.018 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.018 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.018 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.018 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:17.018 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.018 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:17.018 ++ SUPPORT_END=2024-05-14 00:01:17.018 ++ VARIANT='Cloud Edition' 00:01:17.018 ++ VARIANT_ID=cloud 00:01:17.018 + uname -a 00:01:17.018 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:17.018 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:18.394 Hugepages 00:01:18.394 node hugesize free / total 00:01:18.394 node0 1048576kB 0 / 0 00:01:18.394 node0 2048kB 0 / 0 00:01:18.394 node1 1048576kB 0 / 0 00:01:18.394 node1 2048kB 0 / 0 00:01:18.394 00:01:18.394 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.394 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:18.394 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:18.394 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:18.394 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:18.394 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:18.394 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:18.394 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:18.394 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:18.394 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:18.394 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:18.394 + rm -f /tmp/spdk-ld-path 00:01:18.394 + source autorun-spdk.conf 00:01:18.394 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.394 ++ SPDK_TEST_NVMF=1 00:01:18.394 ++ SPDK_TEST_NVME_CLI=1 00:01:18.394 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.394 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.394 ++ SPDK_TEST_VFIOUSER=1 00:01:18.394 ++ SPDK_RUN_UBSAN=1 00:01:18.394 ++ NET_TYPE=phy 00:01:18.394 ++ RUN_NIGHTLY=0 00:01:18.394 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.394 + [[ -n '' ]] 00:01:18.394 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.394 + for M in /var/spdk/build-*-manifest.txt 00:01:18.394 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.394 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.394 + for M in /var/spdk/build-*-manifest.txt 00:01:18.394 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.394 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.394 ++ uname 00:01:18.394 + [[ Linux == \L\i\n\u\x ]] 00:01:18.394 + sudo dmesg -T 00:01:18.394 + sudo dmesg --clear 00:01:18.394 + dmesg_pid=657910 00:01:18.394 + [[ Fedora Linux == FreeBSD ]] 00:01:18.394 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.394 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.394 + sudo dmesg -Tw 00:01:18.394 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.395 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:18.395 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:18.395 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.395 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.395 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.395 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.395 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:18.395 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.395 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.395 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.395 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.395 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.395 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.395 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.395 Test configuration: 00:01:18.395 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.395 SPDK_TEST_NVMF=1 00:01:18.395 SPDK_TEST_NVME_CLI=1 00:01:18.395 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.395 SPDK_TEST_NVMF_NICS=e810 00:01:18.395 SPDK_TEST_VFIOUSER=1 00:01:18.395 SPDK_RUN_UBSAN=1 00:01:18.395 NET_TYPE=phy 00:01:18.395 RUN_NIGHTLY=0 00:16:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:18.395 00:16:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.395 00:16:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.395 00:16:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.395 00:16:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.395 00:16:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.395 00:16:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.395 00:16:44 -- paths/export.sh@5 -- $ export PATH 00:01:18.395 00:16:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.395 00:16:44 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:18.395 00:16:44 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:18.395 00:16:44 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715725004.XXXXXX 00:01:18.395 00:16:44 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715725004.vwPEA2 00:01:18.395 00:16:44 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:18.395 00:16:44 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:18.395 00:16:44 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:18.395 00:16:44 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:18.395 00:16:44 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.395 00:16:44 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:18.395 00:16:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:18.395 00:16:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.395 00:16:44 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:18.395 00:16:44 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:18.395 00:16:44 -- pm/common@17 -- $ local monitor 00:01:18.395 00:16:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.395 00:16:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.395 00:16:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.395 00:16:44 -- pm/common@21 -- $ date +%s 00:01:18.395 00:16:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.395 00:16:44 -- pm/common@21 -- $ date +%s 00:01:18.395 00:16:44 -- pm/common@25 -- $ sleep 1 00:01:18.395 00:16:44 -- pm/common@21 -- $ date +%s 00:01:18.395 00:16:44 -- pm/common@21 -- $ date +%s 00:01:18.395 00:16:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725004 00:01:18.395 00:16:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725004 00:01:18.395 00:16:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725004 00:01:18.395 00:16:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725004 00:01:18.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725004_collect-vmstat.pm.log 00:01:18.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725004_collect-cpu-load.pm.log 00:01:18.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725004_collect-cpu-temp.pm.log 00:01:18.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725004_collect-bmc-pm.bmc.pm.log 00:01:19.333 00:16:45 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:19.333 00:16:45 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.333 00:16:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.333 00:16:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.333 00:16:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.333 Tue May 14 10:16:45 PM UTC 2024 00:01:19.333 00:16:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.333 v24.05-pre-659-g68960dff2 00:01:19.333 00:16:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:19.333 00:16:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.333 00:16:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.333 00:16:45 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:19.333 00:16:45 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:19.333 00:16:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.333 ************************************ 00:01:19.333 START TEST ubsan 00:01:19.333 ************************************ 00:01:19.333 00:16:45 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:01:19.333 using ubsan 00:01:19.333 00:01:19.333 real 0m0.000s 00:01:19.333 user 0m0.000s 00:01:19.333 sys 0m0.000s 00:01:19.333 00:16:45 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:19.333 00:16:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:19.333 ************************************ 00:01:19.333 END TEST ubsan 00:01:19.333 ************************************ 00:01:19.591 00:16:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:19.591 00:16:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:19.591 00:16:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:19.591 00:16:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:19.591 00:16:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:19.591 00:16:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:19.591 00:16:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:19.591 00:16:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:19.591 00:16:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:19.591 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:19.591 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:19.850 Using 'verbs' RDMA provider 00:01:30.396 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:40.412 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:40.412 Creating mk/config.mk...done. 00:01:40.412 Creating mk/cc.flags.mk...done. 00:01:40.412 Type 'make' to build. 00:01:40.412 00:17:05 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:40.412 00:17:05 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:40.412 00:17:05 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:40.412 00:17:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.412 ************************************ 00:01:40.412 START TEST make 00:01:40.412 ************************************ 00:01:40.412 00:17:05 make -- common/autotest_common.sh@1122 -- $ make -j48 00:01:40.412 make[1]: Nothing to be done for 'all'. 
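At this point the job has configured SPDK with the parameters assembled from autorun-spdk.conf and started the parallel build. As an orientation aid only, reproducing the same configuration by hand would look roughly like this (the checkout path and job width are illustrative; the flag list is the one printed by the configure step above):

  # Assumed local checkout; flags mirror the logged './configure' invocation.
  cd ~/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"          # the CI node runs 'make -j48'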
00:01:41.810 The Meson build system 00:01:41.810 Version: 1.3.1 00:01:41.810 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:41.811 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.811 Build type: native build 00:01:41.811 Project name: libvfio-user 00:01:41.811 Project version: 0.0.1 00:01:41.811 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:41.811 C linker for the host machine: cc ld.bfd 2.39-16 00:01:41.811 Host machine cpu family: x86_64 00:01:41.811 Host machine cpu: x86_64 00:01:41.811 Run-time dependency threads found: YES 00:01:41.811 Library dl found: YES 00:01:41.811 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:41.811 Run-time dependency json-c found: YES 0.17 00:01:41.811 Run-time dependency cmocka found: YES 1.1.7 00:01:41.811 Program pytest-3 found: NO 00:01:41.811 Program flake8 found: NO 00:01:41.811 Program misspell-fixer found: NO 00:01:41.811 Program restructuredtext-lint found: NO 00:01:41.811 Program valgrind found: YES (/usr/bin/valgrind) 00:01:41.811 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:41.811 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.811 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.811 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:41.811 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:41.811 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:41.811 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:41.811 Build targets in project: 8 00:01:41.811 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:41.811 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:41.811 00:01:41.811 libvfio-user 0.0.1 00:01:41.811 00:01:41.811 User defined options 00:01:41.811 buildtype : debug 00:01:41.811 default_library: shared 00:01:41.811 libdir : /usr/local/lib 00:01:41.811 00:01:41.811 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:42.760 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:42.760 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:42.760 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:42.760 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:42.760 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:42.760 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:42.760 [6/37] Compiling C object samples/null.p/null.c.o 00:01:42.760 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:42.760 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:42.760 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:43.020 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:43.020 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:43.020 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:43.020 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:43.020 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:43.020 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:43.020 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:43.020 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:43.020 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:43.020 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:43.020 [20/37] Compiling C object samples/server.p/server.c.o 00:01:43.020 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:43.020 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:43.020 [23/37] Compiling C object samples/client.p/client.c.o 00:01:43.020 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:43.020 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:43.020 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:43.021 [27/37] Linking target samples/client 00:01:43.021 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:43.284 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:43.284 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:43.284 [31/37] Linking target test/unit_tests 00:01:43.284 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:43.549 [33/37] Linking target samples/null 00:01:43.549 [34/37] Linking target samples/server 00:01:43.549 [35/37] Linking target samples/gpio-pci-idio-16 00:01:43.549 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:43.549 [37/37] Linking target samples/lspci 00:01:43.549 INFO: autodetecting backend as ninja 00:01:43.549 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
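The libvfio-user sub-build above is a self-contained Meson project pulled in by the --with-vfio-user option: a debug, shared-library configuration compiled with ninja and then staged into the SPDK build tree, as the install step below shows. A rough equivalent of the sequence, given only as a sketch (directory names follow the paths in the log; the exact invocation is owned by SPDK's build scripts):

  # Configure, build, and stage libvfio-user under the SPDK tree (illustrative).
  meson setup build/libvfio-user/build-debug libvfio-user \
      --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C build/libvfio-user/build-debug
  DESTDIR="$PWD/build/libvfio-user" meson install --quiet -C build/libvfio-user/build-debug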
00:01:43.549 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.128 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.128 ninja: no work to do. 00:01:49.407 The Meson build system 00:01:49.407 Version: 1.3.1 00:01:49.407 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:49.407 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:49.407 Build type: native build 00:01:49.407 Program cat found: YES (/usr/bin/cat) 00:01:49.407 Project name: DPDK 00:01:49.407 Project version: 23.11.0 00:01:49.407 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.407 C linker for the host machine: cc ld.bfd 2.39-16 00:01:49.407 Host machine cpu family: x86_64 00:01:49.407 Host machine cpu: x86_64 00:01:49.407 Message: ## Building in Developer Mode ## 00:01:49.407 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:49.407 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:49.407 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:49.407 Program python3 found: YES (/usr/bin/python3) 00:01:49.407 Program cat found: YES (/usr/bin/cat) 00:01:49.407 Compiler for C supports arguments -march=native: YES 00:01:49.407 Checking for size of "void *" : 8 00:01:49.407 Checking for size of "void *" : 8 (cached) 00:01:49.407 Library m found: YES 00:01:49.407 Library numa found: YES 00:01:49.407 Has header "numaif.h" : YES 00:01:49.407 Library fdt found: NO 00:01:49.407 Library execinfo found: NO 00:01:49.407 Has header "execinfo.h" : YES 00:01:49.407 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.407 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:49.407 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:49.407 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:49.407 Run-time dependency openssl found: YES 3.0.9 00:01:49.407 Run-time dependency libpcap found: YES 1.10.4 00:01:49.407 Has header "pcap.h" with dependency libpcap: YES 00:01:49.407 Compiler for C supports arguments -Wcast-qual: YES 00:01:49.407 Compiler for C supports arguments -Wdeprecated: YES 00:01:49.407 Compiler for C supports arguments -Wformat: YES 00:01:49.407 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:49.407 Compiler for C supports arguments -Wformat-security: NO 00:01:49.407 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.407 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:49.407 Compiler for C supports arguments -Wnested-externs: YES 00:01:49.407 Compiler for C supports arguments -Wold-style-definition: YES 00:01:49.407 Compiler for C supports arguments -Wpointer-arith: YES 00:01:49.407 Compiler for C supports arguments -Wsign-compare: YES 00:01:49.407 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:49.407 Compiler for C supports arguments -Wundef: YES 00:01:49.407 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.407 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:49.407 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:49.407 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:49.407 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:49.407 Program objdump found: YES (/usr/bin/objdump) 00:01:49.407 Compiler for C supports arguments -mavx512f: YES 00:01:49.407 Checking if "AVX512 checking" compiles: YES 00:01:49.407 Fetching value of define "__SSE4_2__" : 1 00:01:49.407 Fetching value of define "__AES__" : 1 00:01:49.407 Fetching value of define "__AVX__" : 1 00:01:49.407 Fetching value of define "__AVX2__" : (undefined) 00:01:49.408 Fetching value of define "__AVX512BW__" : (undefined) 00:01:49.408 Fetching value of define "__AVX512CD__" : (undefined) 00:01:49.408 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:49.408 Fetching value of define "__AVX512F__" : (undefined) 00:01:49.408 Fetching value of define "__AVX512VL__" : (undefined) 00:01:49.408 Fetching value of define "__PCLMUL__" : 1 00:01:49.408 Fetching value of define "__RDRND__" : 1 00:01:49.408 Fetching value of define "__RDSEED__" : (undefined) 00:01:49.408 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:49.408 Fetching value of define "__znver1__" : (undefined) 00:01:49.408 Fetching value of define "__znver2__" : (undefined) 00:01:49.408 Fetching value of define "__znver3__" : (undefined) 00:01:49.408 Fetching value of define "__znver4__" : (undefined) 00:01:49.408 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:49.408 Message: lib/log: Defining dependency "log" 00:01:49.408 Message: lib/kvargs: Defining dependency "kvargs" 00:01:49.408 Message: lib/telemetry: Defining dependency "telemetry" 00:01:49.408 Checking for function "getentropy" : NO 00:01:49.408 Message: lib/eal: Defining dependency "eal" 00:01:49.408 Message: lib/ring: Defining dependency "ring" 00:01:49.408 Message: lib/rcu: Defining dependency "rcu" 00:01:49.408 Message: lib/mempool: Defining dependency "mempool" 00:01:49.408 Message: lib/mbuf: Defining dependency "mbuf" 00:01:49.408 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:49.408 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.408 Compiler for C supports arguments -mpclmul: YES 00:01:49.408 Compiler for C supports arguments -maes: YES 00:01:49.408 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.408 Compiler for C supports arguments -mavx512bw: YES 00:01:49.408 Compiler for C supports arguments -mavx512dq: YES 00:01:49.408 Compiler for C supports arguments -mavx512vl: YES 00:01:49.408 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:49.408 Compiler for C supports arguments -mavx2: YES 00:01:49.408 Compiler for C supports arguments -mavx: YES 00:01:49.408 Message: lib/net: Defining dependency "net" 00:01:49.408 Message: lib/meter: Defining dependency "meter" 00:01:49.408 Message: lib/ethdev: Defining dependency "ethdev" 00:01:49.408 Message: lib/pci: Defining dependency "pci" 00:01:49.408 Message: lib/cmdline: Defining dependency "cmdline" 00:01:49.408 Message: lib/hash: Defining dependency "hash" 00:01:49.408 Message: lib/timer: Defining dependency "timer" 00:01:49.408 Message: lib/compressdev: Defining dependency "compressdev" 00:01:49.408 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:49.408 Message: lib/dmadev: Defining dependency "dmadev" 00:01:49.408 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:49.408 Message: lib/power: Defining dependency "power" 00:01:49.408 Message: lib/reorder: Defining dependency "reorder" 00:01:49.408 Message: lib/security: Defining dependency "security" 
00:01:49.408 Has header "linux/userfaultfd.h" : YES 00:01:49.408 Has header "linux/vduse.h" : YES 00:01:49.408 Message: lib/vhost: Defining dependency "vhost" 00:01:49.408 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:49.408 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:49.408 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:49.408 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:49.408 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:49.408 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:49.408 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:49.408 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:49.408 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:49.408 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:49.408 Program doxygen found: YES (/usr/bin/doxygen) 00:01:49.408 Configuring doxy-api-html.conf using configuration 00:01:49.408 Configuring doxy-api-man.conf using configuration 00:01:49.408 Program mandb found: YES (/usr/bin/mandb) 00:01:49.408 Program sphinx-build found: NO 00:01:49.408 Configuring rte_build_config.h using configuration 00:01:49.408 Message: 00:01:49.408 ================= 00:01:49.408 Applications Enabled 00:01:49.408 ================= 00:01:49.408 00:01:49.408 apps: 00:01:49.408 00:01:49.408 00:01:49.408 Message: 00:01:49.408 ================= 00:01:49.408 Libraries Enabled 00:01:49.408 ================= 00:01:49.408 00:01:49.408 libs: 00:01:49.408 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:49.408 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:49.408 cryptodev, dmadev, power, reorder, security, vhost, 00:01:49.408 00:01:49.408 Message: 00:01:49.408 =============== 00:01:49.408 Drivers Enabled 00:01:49.408 =============== 00:01:49.408 00:01:49.408 common: 00:01:49.408 00:01:49.408 bus: 00:01:49.408 pci, vdev, 00:01:49.408 mempool: 00:01:49.408 ring, 00:01:49.408 dma: 00:01:49.408 00:01:49.408 net: 00:01:49.408 00:01:49.408 crypto: 00:01:49.408 00:01:49.408 compress: 00:01:49.408 00:01:49.408 vdpa: 00:01:49.408 00:01:49.408 00:01:49.408 Message: 00:01:49.408 ================= 00:01:49.408 Content Skipped 00:01:49.408 ================= 00:01:49.408 00:01:49.408 apps: 00:01:49.408 dumpcap: explicitly disabled via build config 00:01:49.408 graph: explicitly disabled via build config 00:01:49.408 pdump: explicitly disabled via build config 00:01:49.408 proc-info: explicitly disabled via build config 00:01:49.408 test-acl: explicitly disabled via build config 00:01:49.408 test-bbdev: explicitly disabled via build config 00:01:49.408 test-cmdline: explicitly disabled via build config 00:01:49.408 test-compress-perf: explicitly disabled via build config 00:01:49.408 test-crypto-perf: explicitly disabled via build config 00:01:49.408 test-dma-perf: explicitly disabled via build config 00:01:49.408 test-eventdev: explicitly disabled via build config 00:01:49.408 test-fib: explicitly disabled via build config 00:01:49.408 test-flow-perf: explicitly disabled via build config 00:01:49.408 test-gpudev: explicitly disabled via build config 00:01:49.408 test-mldev: explicitly disabled via build config 00:01:49.408 test-pipeline: explicitly disabled via build config 00:01:49.408 test-pmd: explicitly disabled via build config 00:01:49.408 test-regex: explicitly disabled via build config 
00:01:49.408 test-sad: explicitly disabled via build config 00:01:49.408 test-security-perf: explicitly disabled via build config 00:01:49.408 00:01:49.408 libs: 00:01:49.408 metrics: explicitly disabled via build config 00:01:49.408 acl: explicitly disabled via build config 00:01:49.408 bbdev: explicitly disabled via build config 00:01:49.408 bitratestats: explicitly disabled via build config 00:01:49.408 bpf: explicitly disabled via build config 00:01:49.408 cfgfile: explicitly disabled via build config 00:01:49.408 distributor: explicitly disabled via build config 00:01:49.408 efd: explicitly disabled via build config 00:01:49.408 eventdev: explicitly disabled via build config 00:01:49.408 dispatcher: explicitly disabled via build config 00:01:49.408 gpudev: explicitly disabled via build config 00:01:49.408 gro: explicitly disabled via build config 00:01:49.408 gso: explicitly disabled via build config 00:01:49.408 ip_frag: explicitly disabled via build config 00:01:49.408 jobstats: explicitly disabled via build config 00:01:49.408 latencystats: explicitly disabled via build config 00:01:49.408 lpm: explicitly disabled via build config 00:01:49.408 member: explicitly disabled via build config 00:01:49.408 pcapng: explicitly disabled via build config 00:01:49.408 rawdev: explicitly disabled via build config 00:01:49.408 regexdev: explicitly disabled via build config 00:01:49.408 mldev: explicitly disabled via build config 00:01:49.408 rib: explicitly disabled via build config 00:01:49.408 sched: explicitly disabled via build config 00:01:49.408 stack: explicitly disabled via build config 00:01:49.408 ipsec: explicitly disabled via build config 00:01:49.408 pdcp: explicitly disabled via build config 00:01:49.408 fib: explicitly disabled via build config 00:01:49.408 port: explicitly disabled via build config 00:01:49.408 pdump: explicitly disabled via build config 00:01:49.408 table: explicitly disabled via build config 00:01:49.408 pipeline: explicitly disabled via build config 00:01:49.408 graph: explicitly disabled via build config 00:01:49.408 node: explicitly disabled via build config 00:01:49.408 00:01:49.408 drivers: 00:01:49.408 common/cpt: not in enabled drivers build config 00:01:49.408 common/dpaax: not in enabled drivers build config 00:01:49.408 common/iavf: not in enabled drivers build config 00:01:49.408 common/idpf: not in enabled drivers build config 00:01:49.408 common/mvep: not in enabled drivers build config 00:01:49.408 common/octeontx: not in enabled drivers build config 00:01:49.408 bus/auxiliary: not in enabled drivers build config 00:01:49.408 bus/cdx: not in enabled drivers build config 00:01:49.408 bus/dpaa: not in enabled drivers build config 00:01:49.408 bus/fslmc: not in enabled drivers build config 00:01:49.408 bus/ifpga: not in enabled drivers build config 00:01:49.408 bus/platform: not in enabled drivers build config 00:01:49.408 bus/vmbus: not in enabled drivers build config 00:01:49.408 common/cnxk: not in enabled drivers build config 00:01:49.408 common/mlx5: not in enabled drivers build config 00:01:49.408 common/nfp: not in enabled drivers build config 00:01:49.408 common/qat: not in enabled drivers build config 00:01:49.408 common/sfc_efx: not in enabled drivers build config 00:01:49.408 mempool/bucket: not in enabled drivers build config 00:01:49.408 mempool/cnxk: not in enabled drivers build config 00:01:49.408 mempool/dpaa: not in enabled drivers build config 00:01:49.408 mempool/dpaa2: not in enabled drivers build config 00:01:49.408 
mempool/octeontx: not in enabled drivers build config 00:01:49.408 mempool/stack: not in enabled drivers build config 00:01:49.408 dma/cnxk: not in enabled drivers build config 00:01:49.408 dma/dpaa: not in enabled drivers build config 00:01:49.408 dma/dpaa2: not in enabled drivers build config 00:01:49.408 dma/hisilicon: not in enabled drivers build config 00:01:49.408 dma/idxd: not in enabled drivers build config 00:01:49.408 dma/ioat: not in enabled drivers build config 00:01:49.408 dma/skeleton: not in enabled drivers build config 00:01:49.408 net/af_packet: not in enabled drivers build config 00:01:49.408 net/af_xdp: not in enabled drivers build config 00:01:49.408 net/ark: not in enabled drivers build config 00:01:49.408 net/atlantic: not in enabled drivers build config 00:01:49.408 net/avp: not in enabled drivers build config 00:01:49.408 net/axgbe: not in enabled drivers build config 00:01:49.408 net/bnx2x: not in enabled drivers build config 00:01:49.408 net/bnxt: not in enabled drivers build config 00:01:49.409 net/bonding: not in enabled drivers build config 00:01:49.409 net/cnxk: not in enabled drivers build config 00:01:49.409 net/cpfl: not in enabled drivers build config 00:01:49.409 net/cxgbe: not in enabled drivers build config 00:01:49.409 net/dpaa: not in enabled drivers build config 00:01:49.409 net/dpaa2: not in enabled drivers build config 00:01:49.409 net/e1000: not in enabled drivers build config 00:01:49.409 net/ena: not in enabled drivers build config 00:01:49.409 net/enetc: not in enabled drivers build config 00:01:49.409 net/enetfec: not in enabled drivers build config 00:01:49.409 net/enic: not in enabled drivers build config 00:01:49.409 net/failsafe: not in enabled drivers build config 00:01:49.409 net/fm10k: not in enabled drivers build config 00:01:49.409 net/gve: not in enabled drivers build config 00:01:49.409 net/hinic: not in enabled drivers build config 00:01:49.409 net/hns3: not in enabled drivers build config 00:01:49.409 net/i40e: not in enabled drivers build config 00:01:49.409 net/iavf: not in enabled drivers build config 00:01:49.409 net/ice: not in enabled drivers build config 00:01:49.409 net/idpf: not in enabled drivers build config 00:01:49.409 net/igc: not in enabled drivers build config 00:01:49.409 net/ionic: not in enabled drivers build config 00:01:49.409 net/ipn3ke: not in enabled drivers build config 00:01:49.409 net/ixgbe: not in enabled drivers build config 00:01:49.409 net/mana: not in enabled drivers build config 00:01:49.409 net/memif: not in enabled drivers build config 00:01:49.409 net/mlx4: not in enabled drivers build config 00:01:49.409 net/mlx5: not in enabled drivers build config 00:01:49.409 net/mvneta: not in enabled drivers build config 00:01:49.409 net/mvpp2: not in enabled drivers build config 00:01:49.409 net/netvsc: not in enabled drivers build config 00:01:49.409 net/nfb: not in enabled drivers build config 00:01:49.409 net/nfp: not in enabled drivers build config 00:01:49.409 net/ngbe: not in enabled drivers build config 00:01:49.409 net/null: not in enabled drivers build config 00:01:49.409 net/octeontx: not in enabled drivers build config 00:01:49.409 net/octeon_ep: not in enabled drivers build config 00:01:49.409 net/pcap: not in enabled drivers build config 00:01:49.409 net/pfe: not in enabled drivers build config 00:01:49.409 net/qede: not in enabled drivers build config 00:01:49.409 net/ring: not in enabled drivers build config 00:01:49.409 net/sfc: not in enabled drivers build config 00:01:49.409 net/softnic: 
not in enabled drivers build config 00:01:49.409 net/tap: not in enabled drivers build config 00:01:49.409 net/thunderx: not in enabled drivers build config 00:01:49.409 net/txgbe: not in enabled drivers build config 00:01:49.409 net/vdev_netvsc: not in enabled drivers build config 00:01:49.409 net/vhost: not in enabled drivers build config 00:01:49.409 net/virtio: not in enabled drivers build config 00:01:49.409 net/vmxnet3: not in enabled drivers build config 00:01:49.409 raw/*: missing internal dependency, "rawdev" 00:01:49.409 crypto/armv8: not in enabled drivers build config 00:01:49.409 crypto/bcmfs: not in enabled drivers build config 00:01:49.409 crypto/caam_jr: not in enabled drivers build config 00:01:49.409 crypto/ccp: not in enabled drivers build config 00:01:49.409 crypto/cnxk: not in enabled drivers build config 00:01:49.409 crypto/dpaa_sec: not in enabled drivers build config 00:01:49.409 crypto/dpaa2_sec: not in enabled drivers build config 00:01:49.409 crypto/ipsec_mb: not in enabled drivers build config 00:01:49.409 crypto/mlx5: not in enabled drivers build config 00:01:49.409 crypto/mvsam: not in enabled drivers build config 00:01:49.409 crypto/nitrox: not in enabled drivers build config 00:01:49.409 crypto/null: not in enabled drivers build config 00:01:49.409 crypto/octeontx: not in enabled drivers build config 00:01:49.409 crypto/openssl: not in enabled drivers build config 00:01:49.409 crypto/scheduler: not in enabled drivers build config 00:01:49.409 crypto/uadk: not in enabled drivers build config 00:01:49.409 crypto/virtio: not in enabled drivers build config 00:01:49.409 compress/isal: not in enabled drivers build config 00:01:49.409 compress/mlx5: not in enabled drivers build config 00:01:49.409 compress/octeontx: not in enabled drivers build config 00:01:49.409 compress/zlib: not in enabled drivers build config 00:01:49.409 regex/*: missing internal dependency, "regexdev" 00:01:49.409 ml/*: missing internal dependency, "mldev" 00:01:49.409 vdpa/ifc: not in enabled drivers build config 00:01:49.409 vdpa/mlx5: not in enabled drivers build config 00:01:49.409 vdpa/nfp: not in enabled drivers build config 00:01:49.409 vdpa/sfc: not in enabled drivers build config 00:01:49.409 event/*: missing internal dependency, "eventdev" 00:01:49.409 baseband/*: missing internal dependency, "bbdev" 00:01:49.409 gpu/*: missing internal dependency, "gpudev" 00:01:49.409 00:01:49.409 00:01:49.409 Build targets in project: 85 00:01:49.409 00:01:49.409 DPDK 23.11.0 00:01:49.409 00:01:49.409 User defined options 00:01:49.409 buildtype : debug 00:01:49.409 default_library : shared 00:01:49.409 libdir : lib 00:01:49.409 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:49.409 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:49.409 c_link_args : 00:01:49.409 cpu_instruction_set: native 00:01:49.409 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:49.409 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:49.409 enable_docs : false 00:01:49.409 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 
00:01:49.409 enable_kmods : false 00:01:49.409 tests : false 00:01:49.409 00:01:49.409 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.409 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:49.409 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.409 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:49.409 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:49.409 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:49.409 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:49.409 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:49.409 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:49.409 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:49.409 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:49.409 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:49.409 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:49.409 [12/265] Linking static target lib/librte_kvargs.a 00:01:49.409 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:49.409 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:49.409 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:49.409 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:49.409 [17/265] Linking static target lib/librte_log.a 00:01:49.409 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.409 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.674 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.674 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.938 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.200 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:50.200 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:50.201 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.201 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:50.201 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.201 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.201 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.201 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:50.201 [31/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:50.201 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:50.201 [33/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.201 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.201 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:50.201 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.201 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 
00:01:50.201 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.201 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.201 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.201 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.201 [42/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:50.201 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.201 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:50.201 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.201 [46/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:50.201 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.201 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.469 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:50.469 [50/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:50.469 [51/265] Linking static target lib/librte_telemetry.a 00:01:50.469 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:50.469 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:50.469 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.469 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.469 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:50.469 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:50.469 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:50.469 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.469 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.469 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.469 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:50.469 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:50.469 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:50.469 [65/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.469 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.728 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:50.728 [68/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.728 [69/265] Linking static target lib/librte_pci.a 00:01:50.728 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:50.728 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.728 [72/265] Linking target lib/librte_log.so.24.0 00:01:50.728 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:50.728 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:50.728 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:50.728 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:50.728 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.728 [78/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.728 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.728 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.987 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.987 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.987 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.987 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.987 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:50.987 [86/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:50.987 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.987 [88/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:50.987 [89/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:50.987 [90/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:50.987 [91/265] Linking target lib/librte_kvargs.so.24.0 00:01:51.248 [92/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.248 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.248 [94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.248 [95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.248 [96/265] Linking static target lib/librte_ring.a 00:01:51.248 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.248 [98/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.248 [99/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.248 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.248 [101/265] Linking static target lib/librte_eal.a 00:01:51.248 [102/265] Linking static target lib/librte_meter.a 00:01:51.248 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.248 [104/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.248 [105/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.248 [106/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.248 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.248 [108/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.248 [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.248 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.509 [111/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.509 [112/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:51.509 [113/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.509 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.509 [115/265] Linking target lib/librte_telemetry.so.24.0 00:01:51.509 [116/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.509 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.509 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.509 [119/265] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.509 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.509 [121/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.509 [122/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.509 [123/265] Linking static target lib/librte_rcu.a 00:01:51.509 [124/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.509 [125/265] Linking static target lib/librte_mempool.a 00:01:51.509 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.509 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.776 [128/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.776 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.776 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.776 [131/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:51.776 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.776 [133/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.776 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.776 [135/265] Linking static target lib/librte_cmdline.a 00:01:51.776 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.776 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:51.776 [138/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.034 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.034 [140/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.034 [141/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.034 [142/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.034 [143/265] Linking static target lib/librte_net.a 00:01:52.034 [144/265] Linking static target lib/librte_timer.a 00:01:52.034 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.034 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.034 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.034 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.034 [149/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.034 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.034 [151/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.294 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.294 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.294 [154/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.294 [155/265] Linking static target lib/librte_dmadev.a 00:01:52.294 [156/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.294 [157/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.294 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.294 [159/265] Compiling C 
object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.294 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.552 [161/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.552 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.552 [163/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.552 [164/265] Linking static target lib/librte_compressdev.a 00:01:52.552 [165/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.552 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.552 [167/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.552 [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.552 [169/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.552 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.552 [171/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.552 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.552 [173/265] Linking static target lib/librte_hash.a 00:01:52.552 [174/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.552 [175/265] Linking static target lib/librte_power.a 00:01:52.552 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.552 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.810 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.810 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.810 [180/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.810 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.810 [182/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.810 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.810 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.810 [185/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.810 [186/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.810 [187/265] Linking static target lib/librte_reorder.a 00:01:52.810 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.810 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.810 [190/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.810 [191/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.068 [192/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.068 [193/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.068 [194/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.068 [195/265] Linking static target lib/librte_security.a 00:01:53.068 [196/265] Linking static target lib/librte_mbuf.a 00:01:53.068 [197/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:53.068 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.068 [199/265] Linking static target 
drivers/libtmp_rte_mempool_ring.a 00:01:53.068 [200/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:53.068 [201/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.068 [202/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.068 [203/265] Linking static target drivers/librte_bus_pci.a 00:01:53.069 [204/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.069 [205/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.069 [206/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.069 [207/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.069 [208/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.069 [209/265] Linking static target drivers/librte_bus_vdev.a 00:01:53.069 [210/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.326 [211/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:53.326 [212/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.326 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.326 [214/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.326 [215/265] Linking static target drivers/librte_mempool_ring.a 00:01:53.326 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.326 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.326 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.584 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.584 [220/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.584 [221/265] Linking static target lib/librte_ethdev.a 00:01:53.584 [222/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.584 [223/265] Linking static target lib/librte_cryptodev.a 00:01:54.532 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.505 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:57.406 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.664 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.664 [228/265] Linking target lib/librte_eal.so.24.0 00:01:57.664 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:57.664 [230/265] Linking target lib/librte_ring.so.24.0 00:01:57.664 [231/265] Linking target lib/librte_meter.so.24.0 00:01:57.664 [232/265] Linking target lib/librte_pci.so.24.0 00:01:57.664 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:57.664 [234/265] Linking target lib/librte_timer.so.24.0 00:01:57.664 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:57.923 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:57.923 [237/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:57.923 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:57.923 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:57.923 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:57.923 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:57.923 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:57.923 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:58.181 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:58.181 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:58.181 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:58.181 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:58.181 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:58.181 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:58.181 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:58.181 [251/265] Linking target lib/librte_net.so.24.0 00:01:58.181 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:58.440 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:58.440 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:58.440 [255/265] Linking target lib/librte_security.so.24.0 00:01:58.440 [256/265] Linking target lib/librte_hash.so.24.0 00:01:58.440 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:58.440 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:58.440 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:58.698 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:58.698 [261/265] Linking target lib/librte_power.so.24.0 00:02:01.228 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.228 [263/265] Linking static target lib/librte_vhost.a 00:02:02.603 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.603 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:02.603 INFO: autodetecting backend as ninja 00:02:02.603 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:03.169 CC lib/ut/ut.o 00:02:03.169 CC lib/log/log.o 00:02:03.169 CC lib/log/log_flags.o 00:02:03.169 CC lib/log/log_deprecated.o 00:02:03.169 CC lib/ut_mock/mock.o 00:02:03.428 LIB libspdk_ut_mock.a 00:02:03.428 LIB libspdk_log.a 00:02:03.428 LIB libspdk_ut.a 00:02:03.428 SO libspdk_ut_mock.so.6.0 00:02:03.428 SO libspdk_ut.so.2.0 00:02:03.428 SO libspdk_log.so.7.0 00:02:03.428 SYMLINK libspdk_ut_mock.so 00:02:03.428 SYMLINK libspdk_ut.so 00:02:03.428 SYMLINK libspdk_log.so 00:02:03.687 CXX lib/trace_parser/trace.o 00:02:03.687 CC lib/dma/dma.o 00:02:03.687 CC lib/util/base64.o 00:02:03.687 CC lib/util/bit_array.o 00:02:03.687 CC lib/util/cpuset.o 00:02:03.687 CC lib/util/crc16.o 00:02:03.687 CC lib/ioat/ioat.o 00:02:03.687 CC lib/util/crc32.o 00:02:03.687 CC lib/util/crc32c.o 00:02:03.687 CC lib/util/crc32_ieee.o 00:02:03.688 CC lib/util/crc64.o 00:02:03.688 CC lib/util/dif.o 00:02:03.688 CC lib/util/fd.o 00:02:03.688 CC lib/util/file.o 00:02:03.688 CC lib/util/hexlify.o 00:02:03.688 CC lib/util/iov.o 00:02:03.688 CC 
lib/util/math.o 00:02:03.688 CC lib/util/pipe.o 00:02:03.688 CC lib/util/strerror_tls.o 00:02:03.688 CC lib/util/string.o 00:02:03.688 CC lib/util/uuid.o 00:02:03.688 CC lib/util/fd_group.o 00:02:03.688 CC lib/util/xor.o 00:02:03.688 CC lib/util/zipf.o 00:02:03.688 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.688 CC lib/vfio_user/host/vfio_user.o 00:02:03.946 LIB libspdk_dma.a 00:02:03.946 SO libspdk_dma.so.4.0 00:02:03.946 SYMLINK libspdk_dma.so 00:02:03.946 LIB libspdk_ioat.a 00:02:03.946 SO libspdk_ioat.so.7.0 00:02:04.204 LIB libspdk_vfio_user.a 00:02:04.204 SYMLINK libspdk_ioat.so 00:02:04.204 SO libspdk_vfio_user.so.5.0 00:02:04.204 SYMLINK libspdk_vfio_user.so 00:02:04.204 LIB libspdk_util.a 00:02:04.204 SO libspdk_util.so.9.0 00:02:04.463 SYMLINK libspdk_util.so 00:02:04.721 CC lib/env_dpdk/env.o 00:02:04.721 CC lib/rdma/common.o 00:02:04.721 CC lib/conf/conf.o 00:02:04.721 CC lib/json/json_parse.o 00:02:04.721 CC lib/idxd/idxd.o 00:02:04.721 CC lib/env_dpdk/memory.o 00:02:04.721 CC lib/vmd/vmd.o 00:02:04.721 CC lib/rdma/rdma_verbs.o 00:02:04.721 CC lib/idxd/idxd_user.o 00:02:04.721 CC lib/vmd/led.o 00:02:04.721 CC lib/env_dpdk/pci.o 00:02:04.721 CC lib/json/json_util.o 00:02:04.721 CC lib/json/json_write.o 00:02:04.721 CC lib/env_dpdk/init.o 00:02:04.721 CC lib/env_dpdk/threads.o 00:02:04.721 CC lib/env_dpdk/pci_ioat.o 00:02:04.721 CC lib/env_dpdk/pci_virtio.o 00:02:04.721 CC lib/env_dpdk/pci_vmd.o 00:02:04.721 CC lib/env_dpdk/pci_idxd.o 00:02:04.721 CC lib/env_dpdk/pci_event.o 00:02:04.721 CC lib/env_dpdk/sigbus_handler.o 00:02:04.721 CC lib/env_dpdk/pci_dpdk.o 00:02:04.721 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.721 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.721 LIB libspdk_trace_parser.a 00:02:04.721 SO libspdk_trace_parser.so.5.0 00:02:04.980 SYMLINK libspdk_trace_parser.so 00:02:04.980 LIB libspdk_conf.a 00:02:04.980 SO libspdk_conf.so.6.0 00:02:04.980 LIB libspdk_rdma.a 00:02:04.980 LIB libspdk_json.a 00:02:04.980 SYMLINK libspdk_conf.so 00:02:04.980 SO libspdk_rdma.so.6.0 00:02:04.980 SO libspdk_json.so.6.0 00:02:04.980 SYMLINK libspdk_rdma.so 00:02:04.980 SYMLINK libspdk_json.so 00:02:05.239 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.239 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.239 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.239 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.239 LIB libspdk_idxd.a 00:02:05.239 SO libspdk_idxd.so.12.0 00:02:05.239 SYMLINK libspdk_idxd.so 00:02:05.239 LIB libspdk_vmd.a 00:02:05.239 SO libspdk_vmd.so.6.0 00:02:05.497 SYMLINK libspdk_vmd.so 00:02:05.497 LIB libspdk_jsonrpc.a 00:02:05.497 SO libspdk_jsonrpc.so.6.0 00:02:05.497 SYMLINK libspdk_jsonrpc.so 00:02:05.755 CC lib/rpc/rpc.o 00:02:06.013 LIB libspdk_rpc.a 00:02:06.013 SO libspdk_rpc.so.6.0 00:02:06.013 SYMLINK libspdk_rpc.so 00:02:06.271 CC lib/trace/trace.o 00:02:06.271 CC lib/trace/trace_flags.o 00:02:06.271 CC lib/trace/trace_rpc.o 00:02:06.271 CC lib/keyring/keyring.o 00:02:06.271 CC lib/keyring/keyring_rpc.o 00:02:06.271 CC lib/notify/notify.o 00:02:06.271 CC lib/notify/notify_rpc.o 00:02:06.271 LIB libspdk_notify.a 00:02:06.271 SO libspdk_notify.so.6.0 00:02:06.529 LIB libspdk_keyring.a 00:02:06.529 SYMLINK libspdk_notify.so 00:02:06.529 LIB libspdk_trace.a 00:02:06.529 SO libspdk_keyring.so.1.0 00:02:06.529 SO libspdk_trace.so.10.0 00:02:06.529 SYMLINK libspdk_keyring.so 00:02:06.529 SYMLINK libspdk_trace.so 00:02:06.529 LIB libspdk_env_dpdk.a 00:02:06.787 SO libspdk_env_dpdk.so.14.0 00:02:06.787 CC lib/thread/thread.o 00:02:06.787 CC lib/thread/iobuf.o 00:02:06.787 CC 
lib/sock/sock.o 00:02:06.787 CC lib/sock/sock_rpc.o 00:02:06.787 SYMLINK libspdk_env_dpdk.so 00:02:07.046 LIB libspdk_sock.a 00:02:07.046 SO libspdk_sock.so.9.0 00:02:07.305 SYMLINK libspdk_sock.so 00:02:07.305 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.305 CC lib/nvme/nvme_ctrlr.o 00:02:07.305 CC lib/nvme/nvme_fabric.o 00:02:07.305 CC lib/nvme/nvme_ns_cmd.o 00:02:07.305 CC lib/nvme/nvme_ns.o 00:02:07.305 CC lib/nvme/nvme_pcie_common.o 00:02:07.305 CC lib/nvme/nvme_pcie.o 00:02:07.305 CC lib/nvme/nvme_qpair.o 00:02:07.305 CC lib/nvme/nvme.o 00:02:07.305 CC lib/nvme/nvme_quirks.o 00:02:07.305 CC lib/nvme/nvme_transport.o 00:02:07.305 CC lib/nvme/nvme_discovery.o 00:02:07.305 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.305 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.305 CC lib/nvme/nvme_tcp.o 00:02:07.305 CC lib/nvme/nvme_opal.o 00:02:07.305 CC lib/nvme/nvme_io_msg.o 00:02:07.305 CC lib/nvme/nvme_poll_group.o 00:02:07.305 CC lib/nvme/nvme_zns.o 00:02:07.305 CC lib/nvme/nvme_stubs.o 00:02:07.305 CC lib/nvme/nvme_auth.o 00:02:07.305 CC lib/nvme/nvme_cuse.o 00:02:07.305 CC lib/nvme/nvme_vfio_user.o 00:02:07.305 CC lib/nvme/nvme_rdma.o 00:02:08.236 LIB libspdk_thread.a 00:02:08.236 SO libspdk_thread.so.10.0 00:02:08.236 SYMLINK libspdk_thread.so 00:02:08.494 CC lib/blob/blobstore.o 00:02:08.494 CC lib/init/json_config.o 00:02:08.494 CC lib/blob/request.o 00:02:08.494 CC lib/accel/accel.o 00:02:08.494 CC lib/init/subsystem.o 00:02:08.494 CC lib/blob/zeroes.o 00:02:08.494 CC lib/init/subsystem_rpc.o 00:02:08.494 CC lib/accel/accel_rpc.o 00:02:08.494 CC lib/blob/blob_bs_dev.o 00:02:08.494 CC lib/accel/accel_sw.o 00:02:08.494 CC lib/init/rpc.o 00:02:08.494 CC lib/vfu_tgt/tgt_endpoint.o 00:02:08.494 CC lib/vfu_tgt/tgt_rpc.o 00:02:08.494 CC lib/virtio/virtio.o 00:02:08.494 CC lib/virtio/virtio_vhost_user.o 00:02:08.494 CC lib/virtio/virtio_vfio_user.o 00:02:08.494 CC lib/virtio/virtio_pci.o 00:02:08.752 LIB libspdk_init.a 00:02:08.752 SO libspdk_init.so.5.0 00:02:08.752 LIB libspdk_virtio.a 00:02:08.752 LIB libspdk_vfu_tgt.a 00:02:08.752 SYMLINK libspdk_init.so 00:02:09.010 SO libspdk_vfu_tgt.so.3.0 00:02:09.010 SO libspdk_virtio.so.7.0 00:02:09.010 SYMLINK libspdk_vfu_tgt.so 00:02:09.010 SYMLINK libspdk_virtio.so 00:02:09.010 CC lib/event/app.o 00:02:09.010 CC lib/event/reactor.o 00:02:09.010 CC lib/event/log_rpc.o 00:02:09.010 CC lib/event/app_rpc.o 00:02:09.010 CC lib/event/scheduler_static.o 00:02:09.603 LIB libspdk_event.a 00:02:09.603 SO libspdk_event.so.13.0 00:02:09.603 SYMLINK libspdk_event.so 00:02:09.604 LIB libspdk_accel.a 00:02:09.604 SO libspdk_accel.so.15.0 00:02:09.604 SYMLINK libspdk_accel.so 00:02:09.862 LIB libspdk_nvme.a 00:02:09.862 CC lib/bdev/bdev.o 00:02:09.862 CC lib/bdev/bdev_rpc.o 00:02:09.862 CC lib/bdev/bdev_zone.o 00:02:09.862 CC lib/bdev/part.o 00:02:09.862 CC lib/bdev/scsi_nvme.o 00:02:09.862 SO libspdk_nvme.so.13.0 00:02:10.120 SYMLINK libspdk_nvme.so 00:02:11.496 LIB libspdk_blob.a 00:02:11.496 SO libspdk_blob.so.11.0 00:02:11.496 SYMLINK libspdk_blob.so 00:02:11.754 CC lib/blobfs/blobfs.o 00:02:11.754 CC lib/lvol/lvol.o 00:02:11.754 CC lib/blobfs/tree.o 00:02:12.694 LIB libspdk_bdev.a 00:02:12.694 SO libspdk_bdev.so.15.0 00:02:12.694 LIB libspdk_blobfs.a 00:02:12.694 SO libspdk_blobfs.so.10.0 00:02:12.694 SYMLINK libspdk_bdev.so 00:02:12.694 LIB libspdk_lvol.a 00:02:12.694 SYMLINK libspdk_blobfs.so 00:02:12.694 SO libspdk_lvol.so.10.0 00:02:12.694 SYMLINK libspdk_lvol.so 00:02:12.694 CC lib/nbd/nbd.o 00:02:12.694 CC lib/ublk/ublk.o 00:02:12.694 CC lib/nbd/nbd_rpc.o 
00:02:12.694 CC lib/ublk/ublk_rpc.o 00:02:12.694 CC lib/ftl/ftl_core.o 00:02:12.694 CC lib/scsi/dev.o 00:02:12.694 CC lib/nvmf/ctrlr.o 00:02:12.694 CC lib/nvmf/ctrlr_discovery.o 00:02:12.694 CC lib/scsi/lun.o 00:02:12.694 CC lib/ftl/ftl_init.o 00:02:12.694 CC lib/nvmf/ctrlr_bdev.o 00:02:12.694 CC lib/scsi/port.o 00:02:12.694 CC lib/ftl/ftl_layout.o 00:02:12.694 CC lib/nvmf/subsystem.o 00:02:12.694 CC lib/scsi/scsi.o 00:02:12.694 CC lib/ftl/ftl_debug.o 00:02:12.694 CC lib/nvmf/nvmf.o 00:02:12.694 CC lib/ftl/ftl_io.o 00:02:12.694 CC lib/scsi/scsi_bdev.o 00:02:12.694 CC lib/nvmf/nvmf_rpc.o 00:02:12.695 CC lib/scsi/scsi_pr.o 00:02:12.695 CC lib/nvmf/transport.o 00:02:12.695 CC lib/scsi/scsi_rpc.o 00:02:12.695 CC lib/ftl/ftl_sb.o 00:02:12.695 CC lib/nvmf/tcp.o 00:02:12.695 CC lib/ftl/ftl_l2p.o 00:02:12.695 CC lib/scsi/task.o 00:02:12.695 CC lib/ftl/ftl_l2p_flat.o 00:02:12.695 CC lib/nvmf/stubs.o 00:02:12.695 CC lib/ftl/ftl_nv_cache.o 00:02:12.695 CC lib/nvmf/mdns_server.o 00:02:12.695 CC lib/nvmf/vfio_user.o 00:02:12.695 CC lib/ftl/ftl_band.o 00:02:12.695 CC lib/nvmf/rdma.o 00:02:12.695 CC lib/ftl/ftl_band_ops.o 00:02:12.695 CC lib/nvmf/auth.o 00:02:12.695 CC lib/ftl/ftl_writer.o 00:02:12.695 CC lib/ftl/ftl_reloc.o 00:02:12.695 CC lib/ftl/ftl_rq.o 00:02:12.695 CC lib/ftl/ftl_l2p_cache.o 00:02:12.695 CC lib/ftl/ftl_p2l.o 00:02:12.695 CC lib/ftl/mngt/ftl_mngt.o 00:02:12.695 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:12.695 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:12.695 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:12.695 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:12.695 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:12.695 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:12.960 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.221 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.221 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.221 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.221 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.221 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.221 CC lib/ftl/utils/ftl_conf.o 00:02:13.221 CC lib/ftl/utils/ftl_md.o 00:02:13.221 CC lib/ftl/utils/ftl_mempool.o 00:02:13.221 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.221 CC lib/ftl/utils/ftl_property.o 00:02:13.221 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.221 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.221 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.221 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.221 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.221 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.221 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.221 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.221 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.485 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.485 CC lib/ftl/base/ftl_base_dev.o 00:02:13.485 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.485 CC lib/ftl/ftl_trace.o 00:02:13.485 LIB libspdk_nbd.a 00:02:13.485 SO libspdk_nbd.so.7.0 00:02:13.743 LIB libspdk_scsi.a 00:02:13.743 SYMLINK libspdk_nbd.so 00:02:13.743 SO libspdk_scsi.so.9.0 00:02:13.743 LIB libspdk_ublk.a 00:02:13.743 SYMLINK libspdk_scsi.so 00:02:13.743 SO libspdk_ublk.so.3.0 00:02:14.001 SYMLINK libspdk_ublk.so 00:02:14.001 CC lib/iscsi/conn.o 00:02:14.001 CC lib/iscsi/init_grp.o 00:02:14.001 CC lib/vhost/vhost.o 00:02:14.001 CC lib/vhost/vhost_rpc.o 00:02:14.001 CC lib/iscsi/iscsi.o 00:02:14.001 CC lib/vhost/vhost_scsi.o 00:02:14.001 CC lib/iscsi/md5.o 00:02:14.001 CC lib/vhost/vhost_blk.o 00:02:14.001 CC lib/iscsi/param.o 00:02:14.001 CC lib/vhost/rte_vhost_user.o 00:02:14.001 CC lib/iscsi/portal_grp.o 00:02:14.001 CC lib/iscsi/tgt_node.o 00:02:14.001 CC lib/iscsi/iscsi_subsystem.o 
00:02:14.001 CC lib/iscsi/task.o 00:02:14.001 CC lib/iscsi/iscsi_rpc.o 00:02:14.259 LIB libspdk_ftl.a 00:02:14.259 SO libspdk_ftl.so.9.0 00:02:14.824 SYMLINK libspdk_ftl.so 00:02:15.081 LIB libspdk_vhost.a 00:02:15.339 SO libspdk_vhost.so.8.0 00:02:15.339 LIB libspdk_nvmf.a 00:02:15.339 SYMLINK libspdk_vhost.so 00:02:15.339 SO libspdk_nvmf.so.18.0 00:02:15.339 LIB libspdk_iscsi.a 00:02:15.339 SO libspdk_iscsi.so.8.0 00:02:15.597 SYMLINK libspdk_nvmf.so 00:02:15.597 SYMLINK libspdk_iscsi.so 00:02:15.856 CC module/vfu_device/vfu_virtio.o 00:02:15.856 CC module/vfu_device/vfu_virtio_blk.o 00:02:15.856 CC module/env_dpdk/env_dpdk_rpc.o 00:02:15.856 CC module/vfu_device/vfu_virtio_scsi.o 00:02:15.856 CC module/vfu_device/vfu_virtio_rpc.o 00:02:15.856 CC module/accel/dsa/accel_dsa.o 00:02:15.856 CC module/blob/bdev/blob_bdev.o 00:02:15.856 CC module/accel/error/accel_error.o 00:02:15.856 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:15.856 CC module/sock/posix/posix.o 00:02:15.856 CC module/accel/ioat/accel_ioat.o 00:02:15.856 CC module/scheduler/gscheduler/gscheduler.o 00:02:15.856 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:15.856 CC module/accel/dsa/accel_dsa_rpc.o 00:02:15.856 CC module/keyring/file/keyring.o 00:02:15.856 CC module/accel/error/accel_error_rpc.o 00:02:15.856 CC module/accel/iaa/accel_iaa.o 00:02:15.856 CC module/keyring/file/keyring_rpc.o 00:02:15.856 CC module/accel/iaa/accel_iaa_rpc.o 00:02:15.856 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.114 LIB libspdk_env_dpdk_rpc.a 00:02:16.114 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.114 SYMLINK libspdk_env_dpdk_rpc.so 00:02:16.114 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.114 LIB libspdk_keyring_file.a 00:02:16.114 LIB libspdk_scheduler_gscheduler.a 00:02:16.114 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.114 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.114 SO libspdk_keyring_file.so.1.0 00:02:16.114 LIB libspdk_accel_error.a 00:02:16.114 LIB libspdk_accel_ioat.a 00:02:16.114 LIB libspdk_scheduler_dynamic.a 00:02:16.114 LIB libspdk_accel_iaa.a 00:02:16.114 SO libspdk_accel_error.so.2.0 00:02:16.114 SO libspdk_accel_ioat.so.6.0 00:02:16.114 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.114 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.114 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.114 SYMLINK libspdk_keyring_file.so 00:02:16.114 LIB libspdk_accel_dsa.a 00:02:16.114 SO libspdk_accel_iaa.so.3.0 00:02:16.372 SO libspdk_accel_dsa.so.5.0 00:02:16.372 LIB libspdk_blob_bdev.a 00:02:16.372 SYMLINK libspdk_accel_ioat.so 00:02:16.372 SYMLINK libspdk_accel_error.so 00:02:16.372 SYMLINK libspdk_scheduler_dynamic.so 00:02:16.372 SO libspdk_blob_bdev.so.11.0 00:02:16.372 SYMLINK libspdk_accel_iaa.so 00:02:16.372 SYMLINK libspdk_accel_dsa.so 00:02:16.372 SYMLINK libspdk_blob_bdev.so 00:02:16.633 LIB libspdk_vfu_device.a 00:02:16.633 SO libspdk_vfu_device.so.3.0 00:02:16.633 CC module/bdev/delay/vbdev_delay.o 00:02:16.633 CC module/bdev/error/vbdev_error.o 00:02:16.633 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:16.633 CC module/bdev/error/vbdev_error_rpc.o 00:02:16.633 CC module/blobfs/bdev/blobfs_bdev.o 00:02:16.633 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:16.633 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:16.633 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:16.633 CC module/bdev/split/vbdev_split.o 00:02:16.633 CC module/bdev/aio/bdev_aio.o 00:02:16.633 CC module/bdev/malloc/bdev_malloc.o 00:02:16.633 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:16.633 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:02:16.633 CC module/bdev/lvol/vbdev_lvol.o 00:02:16.633 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:16.633 CC module/bdev/iscsi/bdev_iscsi.o 00:02:16.633 CC module/bdev/aio/bdev_aio_rpc.o 00:02:16.633 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:16.633 CC module/bdev/split/vbdev_split_rpc.o 00:02:16.633 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:16.633 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:16.633 CC module/bdev/raid/bdev_raid.o 00:02:16.633 CC module/bdev/gpt/gpt.o 00:02:16.633 CC module/bdev/nvme/bdev_nvme.o 00:02:16.633 CC module/bdev/passthru/vbdev_passthru.o 00:02:16.633 CC module/bdev/gpt/vbdev_gpt.o 00:02:16.633 CC module/bdev/null/bdev_null.o 00:02:16.633 CC module/bdev/raid/bdev_raid_rpc.o 00:02:16.633 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:16.633 CC module/bdev/ftl/bdev_ftl.o 00:02:16.633 CC module/bdev/null/bdev_null_rpc.o 00:02:16.633 CC module/bdev/raid/bdev_raid_sb.o 00:02:16.633 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:16.633 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:16.633 CC module/bdev/raid/raid0.o 00:02:16.633 CC module/bdev/nvme/nvme_rpc.o 00:02:16.633 CC module/bdev/raid/raid1.o 00:02:16.633 CC module/bdev/nvme/bdev_mdns_client.o 00:02:16.633 CC module/bdev/raid/concat.o 00:02:16.633 CC module/bdev/nvme/vbdev_opal.o 00:02:16.633 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:16.633 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:16.633 SYMLINK libspdk_vfu_device.so 00:02:16.891 LIB libspdk_sock_posix.a 00:02:16.891 SO libspdk_sock_posix.so.6.0 00:02:16.891 LIB libspdk_blobfs_bdev.a 00:02:16.891 LIB libspdk_bdev_zone_block.a 00:02:16.891 SYMLINK libspdk_sock_posix.so 00:02:16.891 SO libspdk_blobfs_bdev.so.6.0 00:02:16.891 LIB libspdk_bdev_split.a 00:02:16.891 SO libspdk_bdev_zone_block.so.6.0 00:02:17.149 SO libspdk_bdev_split.so.6.0 00:02:17.149 LIB libspdk_bdev_error.a 00:02:17.149 SYMLINK libspdk_blobfs_bdev.so 00:02:17.149 SO libspdk_bdev_error.so.6.0 00:02:17.149 LIB libspdk_bdev_null.a 00:02:17.149 LIB libspdk_bdev_ftl.a 00:02:17.149 SYMLINK libspdk_bdev_split.so 00:02:17.149 SO libspdk_bdev_null.so.6.0 00:02:17.149 SYMLINK libspdk_bdev_zone_block.so 00:02:17.149 LIB libspdk_bdev_gpt.a 00:02:17.149 SO libspdk_bdev_ftl.so.6.0 00:02:17.149 LIB libspdk_bdev_aio.a 00:02:17.149 SYMLINK libspdk_bdev_error.so 00:02:17.149 SO libspdk_bdev_gpt.so.6.0 00:02:17.149 LIB libspdk_bdev_passthru.a 00:02:17.149 SO libspdk_bdev_aio.so.6.0 00:02:17.149 SO libspdk_bdev_passthru.so.6.0 00:02:17.149 SYMLINK libspdk_bdev_null.so 00:02:17.149 SYMLINK libspdk_bdev_ftl.so 00:02:17.149 LIB libspdk_bdev_iscsi.a 00:02:17.149 LIB libspdk_bdev_delay.a 00:02:17.149 SYMLINK libspdk_bdev_gpt.so 00:02:17.149 SO libspdk_bdev_iscsi.so.6.0 00:02:17.149 SO libspdk_bdev_delay.so.6.0 00:02:17.149 SYMLINK libspdk_bdev_aio.so 00:02:17.149 LIB libspdk_bdev_malloc.a 00:02:17.149 SYMLINK libspdk_bdev_passthru.so 00:02:17.149 SO libspdk_bdev_malloc.so.6.0 00:02:17.149 SYMLINK libspdk_bdev_delay.so 00:02:17.149 SYMLINK libspdk_bdev_iscsi.so 00:02:17.406 LIB libspdk_bdev_lvol.a 00:02:17.406 SYMLINK libspdk_bdev_malloc.so 00:02:17.406 SO libspdk_bdev_lvol.so.6.0 00:02:17.406 LIB libspdk_bdev_virtio.a 00:02:17.406 SO libspdk_bdev_virtio.so.6.0 00:02:17.406 SYMLINK libspdk_bdev_lvol.so 00:02:17.406 SYMLINK libspdk_bdev_virtio.so 00:02:17.664 LIB libspdk_bdev_raid.a 00:02:17.664 SO libspdk_bdev_raid.so.6.0 00:02:17.921 SYMLINK libspdk_bdev_raid.so 00:02:18.853 LIB libspdk_bdev_nvme.a 00:02:18.853 SO libspdk_bdev_nvme.so.7.0 00:02:19.110 SYMLINK 
libspdk_bdev_nvme.so 00:02:19.368 CC module/event/subsystems/vmd/vmd.o 00:02:19.368 CC module/event/subsystems/iobuf/iobuf.o 00:02:19.368 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:19.368 CC module/event/subsystems/sock/sock.o 00:02:19.368 CC module/event/subsystems/scheduler/scheduler.o 00:02:19.368 CC module/event/subsystems/keyring/keyring.o 00:02:19.368 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:19.368 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:19.368 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:19.626 LIB libspdk_event_keyring.a 00:02:19.626 LIB libspdk_event_sock.a 00:02:19.626 LIB libspdk_event_vmd.a 00:02:19.626 LIB libspdk_event_vhost_blk.a 00:02:19.626 LIB libspdk_event_vfu_tgt.a 00:02:19.626 LIB libspdk_event_scheduler.a 00:02:19.626 LIB libspdk_event_iobuf.a 00:02:19.626 SO libspdk_event_keyring.so.1.0 00:02:19.626 SO libspdk_event_sock.so.5.0 00:02:19.626 SO libspdk_event_vhost_blk.so.3.0 00:02:19.626 SO libspdk_event_vmd.so.6.0 00:02:19.626 SO libspdk_event_scheduler.so.4.0 00:02:19.626 SO libspdk_event_vfu_tgt.so.3.0 00:02:19.626 SO libspdk_event_iobuf.so.3.0 00:02:19.626 SYMLINK libspdk_event_keyring.so 00:02:19.626 SYMLINK libspdk_event_sock.so 00:02:19.626 SYMLINK libspdk_event_vhost_blk.so 00:02:19.626 SYMLINK libspdk_event_vfu_tgt.so 00:02:19.626 SYMLINK libspdk_event_scheduler.so 00:02:19.626 SYMLINK libspdk_event_vmd.so 00:02:19.626 SYMLINK libspdk_event_iobuf.so 00:02:19.884 CC module/event/subsystems/accel/accel.o 00:02:19.884 LIB libspdk_event_accel.a 00:02:20.142 SO libspdk_event_accel.so.6.0 00:02:20.142 SYMLINK libspdk_event_accel.so 00:02:20.142 CC module/event/subsystems/bdev/bdev.o 00:02:20.400 LIB libspdk_event_bdev.a 00:02:20.400 SO libspdk_event_bdev.so.6.0 00:02:20.400 SYMLINK libspdk_event_bdev.so 00:02:20.657 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:20.657 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:20.657 CC module/event/subsystems/nbd/nbd.o 00:02:20.657 CC module/event/subsystems/scsi/scsi.o 00:02:20.657 CC module/event/subsystems/ublk/ublk.o 00:02:20.915 LIB libspdk_event_nbd.a 00:02:20.915 LIB libspdk_event_ublk.a 00:02:20.915 LIB libspdk_event_scsi.a 00:02:20.915 SO libspdk_event_nbd.so.6.0 00:02:20.915 SO libspdk_event_ublk.so.3.0 00:02:20.915 SO libspdk_event_scsi.so.6.0 00:02:20.915 SYMLINK libspdk_event_ublk.so 00:02:20.915 SYMLINK libspdk_event_nbd.so 00:02:20.915 SYMLINK libspdk_event_scsi.so 00:02:20.915 LIB libspdk_event_nvmf.a 00:02:20.915 SO libspdk_event_nvmf.so.6.0 00:02:20.915 SYMLINK libspdk_event_nvmf.so 00:02:21.173 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.173 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.173 LIB libspdk_event_vhost_scsi.a 00:02:21.173 LIB libspdk_event_iscsi.a 00:02:21.173 SO libspdk_event_vhost_scsi.so.3.0 00:02:21.173 SO libspdk_event_iscsi.so.6.0 00:02:21.431 SYMLINK libspdk_event_vhost_scsi.so 00:02:21.431 SYMLINK libspdk_event_iscsi.so 00:02:21.431 SO libspdk.so.6.0 00:02:21.431 SYMLINK libspdk.so 00:02:21.700 CXX app/trace/trace.o 00:02:21.700 CC app/spdk_nvme_perf/perf.o 00:02:21.700 CC app/trace_record/trace_record.o 00:02:21.700 CC app/spdk_top/spdk_top.o 00:02:21.700 TEST_HEADER include/spdk/accel.h 00:02:21.700 CC app/spdk_nvme_identify/identify.o 00:02:21.700 CC test/rpc_client/rpc_client_test.o 00:02:21.700 TEST_HEADER include/spdk/accel_module.h 00:02:21.700 CC app/spdk_lspci/spdk_lspci.o 00:02:21.700 TEST_HEADER include/spdk/assert.h 00:02:21.700 TEST_HEADER include/spdk/barrier.h 00:02:21.700 CC app/spdk_nvme_discover/discovery_aer.o 
00:02:21.700 TEST_HEADER include/spdk/base64.h 00:02:21.700 TEST_HEADER include/spdk/bdev.h 00:02:21.700 TEST_HEADER include/spdk/bdev_module.h 00:02:21.700 TEST_HEADER include/spdk/bdev_zone.h 00:02:21.700 TEST_HEADER include/spdk/bit_array.h 00:02:21.700 TEST_HEADER include/spdk/bit_pool.h 00:02:21.701 TEST_HEADER include/spdk/blob_bdev.h 00:02:21.701 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:21.701 TEST_HEADER include/spdk/blobfs.h 00:02:21.701 TEST_HEADER include/spdk/blob.h 00:02:21.701 TEST_HEADER include/spdk/conf.h 00:02:21.701 TEST_HEADER include/spdk/config.h 00:02:21.701 TEST_HEADER include/spdk/cpuset.h 00:02:21.701 TEST_HEADER include/spdk/crc16.h 00:02:21.701 TEST_HEADER include/spdk/crc32.h 00:02:21.701 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:21.701 TEST_HEADER include/spdk/crc64.h 00:02:21.701 TEST_HEADER include/spdk/dif.h 00:02:21.701 CC app/spdk_dd/spdk_dd.o 00:02:21.701 TEST_HEADER include/spdk/dma.h 00:02:21.701 CC app/nvmf_tgt/nvmf_main.o 00:02:21.701 TEST_HEADER include/spdk/endian.h 00:02:21.701 TEST_HEADER include/spdk/env_dpdk.h 00:02:21.701 TEST_HEADER include/spdk/env.h 00:02:21.701 TEST_HEADER include/spdk/event.h 00:02:21.701 TEST_HEADER include/spdk/fd_group.h 00:02:21.701 CC app/iscsi_tgt/iscsi_tgt.o 00:02:21.701 CC app/vhost/vhost.o 00:02:21.701 TEST_HEADER include/spdk/fd.h 00:02:21.701 TEST_HEADER include/spdk/file.h 00:02:21.701 TEST_HEADER include/spdk/ftl.h 00:02:21.701 TEST_HEADER include/spdk/gpt_spec.h 00:02:21.701 TEST_HEADER include/spdk/hexlify.h 00:02:21.701 TEST_HEADER include/spdk/histogram_data.h 00:02:21.701 TEST_HEADER include/spdk/idxd.h 00:02:21.701 TEST_HEADER include/spdk/idxd_spec.h 00:02:21.701 TEST_HEADER include/spdk/init.h 00:02:21.701 TEST_HEADER include/spdk/ioat.h 00:02:21.701 TEST_HEADER include/spdk/ioat_spec.h 00:02:21.701 CC test/event/event_perf/event_perf.o 00:02:21.701 CC examples/sock/hello_world/hello_sock.o 00:02:21.701 TEST_HEADER include/spdk/iscsi_spec.h 00:02:21.701 CC app/spdk_tgt/spdk_tgt.o 00:02:21.701 CC examples/accel/perf/accel_perf.o 00:02:21.701 TEST_HEADER include/spdk/json.h 00:02:21.701 CC examples/nvme/hotplug/hotplug.o 00:02:21.701 CC examples/vmd/lsvmd/lsvmd.o 00:02:21.701 CC examples/ioat/verify/verify.o 00:02:21.701 TEST_HEADER include/spdk/jsonrpc.h 00:02:21.701 CC examples/vmd/led/led.o 00:02:21.701 CC examples/util/zipf/zipf.o 00:02:21.701 CC examples/nvme/reconnect/reconnect.o 00:02:21.701 CC examples/ioat/perf/perf.o 00:02:21.701 TEST_HEADER include/spdk/keyring.h 00:02:21.701 CC examples/nvme/hello_world/hello_world.o 00:02:21.701 CC test/thread/poller_perf/poller_perf.o 00:02:21.701 CC examples/nvme/abort/abort.o 00:02:21.701 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.701 TEST_HEADER include/spdk/keyring_module.h 00:02:21.701 CC examples/nvme/arbitration/arbitration.o 00:02:21.701 CC examples/idxd/perf/perf.o 00:02:21.701 TEST_HEADER include/spdk/likely.h 00:02:21.701 TEST_HEADER include/spdk/log.h 00:02:21.959 TEST_HEADER include/spdk/lvol.h 00:02:21.959 CC app/fio/nvme/fio_plugin.o 00:02:21.959 CC test/nvme/aer/aer.o 00:02:21.959 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:21.959 TEST_HEADER include/spdk/memory.h 00:02:21.959 TEST_HEADER include/spdk/mmio.h 00:02:21.959 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.959 TEST_HEADER include/spdk/nbd.h 00:02:21.959 TEST_HEADER include/spdk/notify.h 00:02:21.959 TEST_HEADER include/spdk/nvme.h 00:02:21.959 TEST_HEADER include/spdk/nvme_intel.h 00:02:21.959 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:21.959 
TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:21.959 TEST_HEADER include/spdk/nvme_spec.h 00:02:21.959 TEST_HEADER include/spdk/nvme_zns.h 00:02:21.959 CC examples/blob/hello_world/hello_blob.o 00:02:21.960 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:21.960 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:21.960 CC examples/blob/cli/blobcli.o 00:02:21.960 CC examples/nvmf/nvmf/nvmf.o 00:02:21.960 CC examples/bdev/hello_world/hello_bdev.o 00:02:21.960 TEST_HEADER include/spdk/nvmf.h 00:02:21.960 CC test/accel/dif/dif.o 00:02:21.960 CC examples/thread/thread/thread_ex.o 00:02:21.960 TEST_HEADER include/spdk/nvmf_spec.h 00:02:21.960 CC test/blobfs/mkfs/mkfs.o 00:02:21.960 CC test/app/bdev_svc/bdev_svc.o 00:02:21.960 TEST_HEADER include/spdk/nvmf_transport.h 00:02:21.960 CC test/bdev/bdevio/bdevio.o 00:02:21.960 CC examples/bdev/bdevperf/bdevperf.o 00:02:21.960 TEST_HEADER include/spdk/opal.h 00:02:21.960 TEST_HEADER include/spdk/opal_spec.h 00:02:21.960 TEST_HEADER include/spdk/pci_ids.h 00:02:21.960 TEST_HEADER include/spdk/pipe.h 00:02:21.960 TEST_HEADER include/spdk/queue.h 00:02:21.960 TEST_HEADER include/spdk/reduce.h 00:02:21.960 CC test/dma/test_dma/test_dma.o 00:02:21.960 TEST_HEADER include/spdk/rpc.h 00:02:21.960 TEST_HEADER include/spdk/scheduler.h 00:02:21.960 TEST_HEADER include/spdk/scsi.h 00:02:21.960 TEST_HEADER include/spdk/scsi_spec.h 00:02:21.960 TEST_HEADER include/spdk/sock.h 00:02:21.960 TEST_HEADER include/spdk/stdinc.h 00:02:21.960 TEST_HEADER include/spdk/string.h 00:02:21.960 TEST_HEADER include/spdk/thread.h 00:02:21.960 TEST_HEADER include/spdk/trace.h 00:02:21.960 TEST_HEADER include/spdk/trace_parser.h 00:02:21.960 TEST_HEADER include/spdk/tree.h 00:02:21.960 CC test/env/mem_callbacks/mem_callbacks.o 00:02:21.960 TEST_HEADER include/spdk/ublk.h 00:02:21.960 CC test/lvol/esnap/esnap.o 00:02:21.960 TEST_HEADER include/spdk/util.h 00:02:21.960 LINK spdk_lspci 00:02:21.960 TEST_HEADER include/spdk/uuid.h 00:02:21.960 TEST_HEADER include/spdk/version.h 00:02:21.960 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:21.960 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:21.960 TEST_HEADER include/spdk/vhost.h 00:02:21.960 TEST_HEADER include/spdk/vmd.h 00:02:21.960 TEST_HEADER include/spdk/xor.h 00:02:21.960 TEST_HEADER include/spdk/zipf.h 00:02:21.960 CXX test/cpp_headers/accel.o 00:02:21.960 LINK rpc_client_test 00:02:22.223 LINK lsvmd 00:02:22.223 LINK spdk_nvme_discover 00:02:22.223 LINK event_perf 00:02:22.223 LINK led 00:02:22.223 LINK interrupt_tgt 00:02:22.223 LINK poller_perf 00:02:22.223 LINK nvmf_tgt 00:02:22.223 LINK zipf 00:02:22.223 LINK vhost 00:02:22.223 LINK spdk_trace_record 00:02:22.223 LINK iscsi_tgt 00:02:22.223 LINK pmr_persistence 00:02:22.223 LINK cmb_copy 00:02:22.223 LINK spdk_tgt 00:02:22.223 LINK ioat_perf 00:02:22.223 LINK verify 00:02:22.223 LINK bdev_svc 00:02:22.223 LINK hello_sock 00:02:22.223 LINK mkfs 00:02:22.223 LINK hello_world 00:02:22.223 LINK hotplug 00:02:22.223 CXX test/cpp_headers/accel_module.o 00:02:22.486 LINK hello_blob 00:02:22.486 LINK hello_bdev 00:02:22.486 LINK aer 00:02:22.486 LINK thread 00:02:22.486 LINK spdk_dd 00:02:22.486 LINK nvmf 00:02:22.486 LINK idxd_perf 00:02:22.486 CXX test/cpp_headers/assert.o 00:02:22.486 CXX test/cpp_headers/barrier.o 00:02:22.486 LINK arbitration 00:02:22.486 LINK reconnect 00:02:22.486 LINK abort 00:02:22.486 LINK spdk_trace 00:02:22.486 CXX test/cpp_headers/base64.o 00:02:22.486 CC test/env/vtophys/vtophys.o 00:02:22.486 CC test/event/reactor/reactor.o 00:02:22.486 CC 
test/nvme/reset/reset.o 00:02:22.486 LINK dif 00:02:22.486 CXX test/cpp_headers/bdev.o 00:02:22.486 CC test/app/histogram_perf/histogram_perf.o 00:02:22.760 CC app/fio/bdev/fio_plugin.o 00:02:22.760 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:22.760 LINK bdevio 00:02:22.760 LINK test_dma 00:02:22.760 CC test/nvme/sgl/sgl.o 00:02:22.760 CC test/event/reactor_perf/reactor_perf.o 00:02:22.760 LINK accel_perf 00:02:22.760 CC test/app/jsoncat/jsoncat.o 00:02:22.760 CC test/event/app_repeat/app_repeat.o 00:02:22.760 LINK nvme_manage 00:02:22.760 CXX test/cpp_headers/bdev_module.o 00:02:22.760 CC test/env/memory/memory_ut.o 00:02:22.760 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:22.760 CXX test/cpp_headers/bdev_zone.o 00:02:22.760 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:22.760 CC test/app/stub/stub.o 00:02:22.760 CC test/nvme/e2edp/nvme_dp.o 00:02:22.760 CXX test/cpp_headers/bit_array.o 00:02:22.760 CXX test/cpp_headers/bit_pool.o 00:02:22.760 CXX test/cpp_headers/blob_bdev.o 00:02:22.760 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:22.760 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:22.760 CXX test/cpp_headers/blobfs_bdev.o 00:02:22.760 CC test/nvme/overhead/overhead.o 00:02:22.760 CC test/nvme/err_injection/err_injection.o 00:02:22.760 CXX test/cpp_headers/blobfs.o 00:02:22.760 LINK vtophys 00:02:22.760 LINK blobcli 00:02:22.760 CC test/event/scheduler/scheduler.o 00:02:22.760 LINK reactor 00:02:22.760 CC test/env/pci/pci_ut.o 00:02:23.064 CXX test/cpp_headers/blob.o 00:02:23.064 LINK histogram_perf 00:02:23.064 LINK spdk_nvme 00:02:23.064 CC test/nvme/startup/startup.o 00:02:23.064 LINK env_dpdk_post_init 00:02:23.064 CC test/nvme/reserve/reserve.o 00:02:23.064 LINK reactor_perf 00:02:23.064 CC test/nvme/simple_copy/simple_copy.o 00:02:23.064 LINK jsoncat 00:02:23.064 CXX test/cpp_headers/conf.o 00:02:23.064 CC test/nvme/connect_stress/connect_stress.o 00:02:23.064 CC test/nvme/boot_partition/boot_partition.o 00:02:23.064 LINK app_repeat 00:02:23.064 CXX test/cpp_headers/config.o 00:02:23.064 LINK reset 00:02:23.064 CXX test/cpp_headers/cpuset.o 00:02:23.064 CXX test/cpp_headers/crc16.o 00:02:23.064 CXX test/cpp_headers/crc32.o 00:02:23.064 CC test/nvme/compliance/nvme_compliance.o 00:02:23.064 LINK mem_callbacks 00:02:23.064 LINK stub 00:02:23.064 CXX test/cpp_headers/crc64.o 00:02:23.064 LINK sgl 00:02:23.335 CC test/nvme/fused_ordering/fused_ordering.o 00:02:23.335 CXX test/cpp_headers/dif.o 00:02:23.335 CXX test/cpp_headers/dma.o 00:02:23.335 LINK spdk_nvme_perf 00:02:23.335 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:23.335 CXX test/cpp_headers/endian.o 00:02:23.335 CXX test/cpp_headers/env_dpdk.o 00:02:23.335 CXX test/cpp_headers/env.o 00:02:23.335 CC test/nvme/fdp/fdp.o 00:02:23.335 CXX test/cpp_headers/event.o 00:02:23.335 CXX test/cpp_headers/fd_group.o 00:02:23.335 LINK spdk_nvme_identify 00:02:23.335 CXX test/cpp_headers/fd.o 00:02:23.335 LINK err_injection 00:02:23.335 CXX test/cpp_headers/file.o 00:02:23.335 CXX test/cpp_headers/ftl.o 00:02:23.335 CXX test/cpp_headers/gpt_spec.o 00:02:23.335 CXX test/cpp_headers/hexlify.o 00:02:23.335 CC test/nvme/cuse/cuse.o 00:02:23.335 LINK scheduler 00:02:23.335 LINK nvme_dp 00:02:23.335 LINK startup 00:02:23.335 LINK bdevperf 00:02:23.335 CXX test/cpp_headers/histogram_data.o 00:02:23.335 LINK spdk_top 00:02:23.335 CXX test/cpp_headers/idxd.o 00:02:23.335 CXX test/cpp_headers/idxd_spec.o 00:02:23.335 LINK boot_partition 00:02:23.335 LINK connect_stress 00:02:23.335 LINK reserve 00:02:23.335 CXX 
test/cpp_headers/init.o 00:02:23.335 CXX test/cpp_headers/ioat.o 00:02:23.335 LINK overhead 00:02:23.335 CXX test/cpp_headers/ioat_spec.o 00:02:23.593 CXX test/cpp_headers/iscsi_spec.o 00:02:23.593 LINK simple_copy 00:02:23.593 CXX test/cpp_headers/json.o 00:02:23.593 CXX test/cpp_headers/jsonrpc.o 00:02:23.593 CXX test/cpp_headers/keyring_module.o 00:02:23.593 CXX test/cpp_headers/keyring.o 00:02:23.593 CXX test/cpp_headers/likely.o 00:02:23.593 LINK nvme_fuzz 00:02:23.593 CXX test/cpp_headers/log.o 00:02:23.593 LINK spdk_bdev 00:02:23.593 CXX test/cpp_headers/lvol.o 00:02:23.593 CXX test/cpp_headers/memory.o 00:02:23.593 CXX test/cpp_headers/mmio.o 00:02:23.593 LINK doorbell_aers 00:02:23.593 CXX test/cpp_headers/nbd.o 00:02:23.593 CXX test/cpp_headers/notify.o 00:02:23.593 CXX test/cpp_headers/nvme.o 00:02:23.593 LINK fused_ordering 00:02:23.593 CXX test/cpp_headers/nvme_intel.o 00:02:23.593 LINK vhost_fuzz 00:02:23.593 CXX test/cpp_headers/nvme_ocssd.o 00:02:23.593 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:23.593 CXX test/cpp_headers/nvme_spec.o 00:02:23.593 LINK pci_ut 00:02:23.593 CXX test/cpp_headers/nvme_zns.o 00:02:23.593 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.593 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.593 CXX test/cpp_headers/nvmf.o 00:02:23.593 CXX test/cpp_headers/nvmf_spec.o 00:02:23.593 CXX test/cpp_headers/nvmf_transport.o 00:02:23.593 CXX test/cpp_headers/opal.o 00:02:23.855 CXX test/cpp_headers/opal_spec.o 00:02:23.855 CXX test/cpp_headers/pci_ids.o 00:02:23.855 CXX test/cpp_headers/pipe.o 00:02:23.855 CXX test/cpp_headers/queue.o 00:02:23.855 CXX test/cpp_headers/reduce.o 00:02:23.855 CXX test/cpp_headers/rpc.o 00:02:23.855 LINK nvme_compliance 00:02:23.855 CXX test/cpp_headers/scheduler.o 00:02:23.855 CXX test/cpp_headers/scsi.o 00:02:23.855 CXX test/cpp_headers/scsi_spec.o 00:02:23.855 CXX test/cpp_headers/sock.o 00:02:23.855 CXX test/cpp_headers/stdinc.o 00:02:23.855 CXX test/cpp_headers/string.o 00:02:23.855 CXX test/cpp_headers/thread.o 00:02:23.855 CXX test/cpp_headers/trace.o 00:02:23.855 CXX test/cpp_headers/trace_parser.o 00:02:23.855 CXX test/cpp_headers/tree.o 00:02:23.855 CXX test/cpp_headers/ublk.o 00:02:23.855 CXX test/cpp_headers/util.o 00:02:23.855 CXX test/cpp_headers/uuid.o 00:02:23.855 CXX test/cpp_headers/version.o 00:02:23.855 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.855 CXX test/cpp_headers/vfio_user_spec.o 00:02:23.855 CXX test/cpp_headers/vhost.o 00:02:23.855 CXX test/cpp_headers/vmd.o 00:02:23.855 LINK fdp 00:02:23.855 CXX test/cpp_headers/xor.o 00:02:23.855 CXX test/cpp_headers/zipf.o 00:02:24.418 LINK memory_ut 00:02:24.982 LINK cuse 00:02:24.982 LINK iscsi_fuzz 00:02:27.511 LINK esnap 00:02:27.769 00:02:27.769 real 0m47.725s 00:02:27.769 user 10m4.095s 00:02:27.769 sys 2m26.198s 00:02:27.769 00:17:53 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:27.769 00:17:53 make -- common/autotest_common.sh@10 -- $ set +x 00:02:27.769 ************************************ 00:02:27.769 END TEST make 00:02:27.769 ************************************ 00:02:27.769 00:17:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:27.769 00:17:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:27.769 00:17:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:27.769 00:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:27.769 00:17:53 -- pm/common@44 
-- $ pid=657945 00:02:27.769 00:17:53 -- pm/common@50 -- $ kill -TERM 657945 00:02:27.769 00:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:27.769 00:17:53 -- pm/common@44 -- $ pid=657947 00:02:27.769 00:17:53 -- pm/common@50 -- $ kill -TERM 657947 00:02:27.769 00:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:27.769 00:17:53 -- pm/common@44 -- $ pid=657948 00:02:27.769 00:17:53 -- pm/common@50 -- $ kill -TERM 657948 00:02:27.769 00:17:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:27.769 00:17:53 -- pm/common@44 -- $ pid=657983 00:02:27.769 00:17:53 -- pm/common@50 -- $ sudo -E kill -TERM 657983 00:02:27.769 00:17:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:27.769 00:17:53 -- nvmf/common.sh@7 -- # uname -s 00:02:27.769 00:17:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:27.769 00:17:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:27.769 00:17:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:27.769 00:17:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:27.769 00:17:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:27.769 00:17:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:27.769 00:17:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:27.769 00:17:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:27.769 00:17:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:27.769 00:17:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:27.769 00:17:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:27.769 00:17:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:27.769 00:17:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:27.769 00:17:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:27.769 00:17:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:27.769 00:17:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:27.769 00:17:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:27.769 00:17:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:27.769 00:17:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.769 00:17:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.769 00:17:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.769 00:17:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:02:27.769 00:17:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.769 00:17:53 -- paths/export.sh@5 -- # export PATH 00:02:27.769 00:17:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.769 00:17:53 -- nvmf/common.sh@47 -- # : 0 00:02:27.769 00:17:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:27.769 00:17:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:27.769 00:17:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:27.769 00:17:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:27.769 00:17:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:27.769 00:17:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:27.769 00:17:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:27.769 00:17:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:27.769 00:17:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:27.769 00:17:53 -- spdk/autotest.sh@32 -- # uname -s 00:02:27.769 00:17:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:27.769 00:17:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:27.769 00:17:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.769 00:17:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.769 00:17:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.769 00:17:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.769 00:17:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.769 00:17:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.769 00:17:53 -- spdk/autotest.sh@48 -- # udevadm_pid=712580 00:02:27.769 00:17:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.769 00:17:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:27.769 00:17:53 -- pm/common@17 -- # local monitor 00:02:27.769 00:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@21 -- # date +%s 00:02:27.769 00:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.769 00:17:53 -- pm/common@21 -- # date +%s 00:02:27.769 00:17:53 -- pm/common@25 -- # sleep 1 00:02:27.769 00:17:53 -- pm/common@21 -- # date +%s 00:02:27.769 00:17:53 -- pm/common@21 -- # date +%s 00:02:27.769 00:17:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725073 00:02:27.769 00:17:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725073 00:02:27.769 00:17:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725073 00:02:27.769 00:17:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725073 00:02:27.769 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725073_collect-vmstat.pm.log 00:02:27.769 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725073_collect-cpu-load.pm.log 00:02:27.769 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725073_collect-cpu-temp.pm.log 00:02:27.769 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725073_collect-bmc-pm.bmc.pm.log 00:02:28.704 00:17:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.704 00:17:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.704 00:17:54 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:28.704 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:02:28.704 00:17:54 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.704 00:17:54 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:28.704 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:02:28.962 00:17:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:28.962 00:17:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.962 00:17:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.962 00:17:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.962 00:17:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.962 00:17:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.962 00:17:54 -- common/autotest_common.sh@1452 -- # uname 00:02:28.962 00:17:54 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:28.962 00:17:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.962 00:17:54 -- common/autotest_common.sh@1472 -- # uname 00:02:28.962 00:17:54 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:28.962 00:17:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:28.962 00:17:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:28.962 00:17:54 -- spdk/autotest.sh@72 -- # hash lcov 00:02:28.962 00:17:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:28.962 00:17:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:28.962 --rc lcov_branch_coverage=1 00:02:28.962 --rc lcov_function_coverage=1 00:02:28.962 --rc genhtml_branch_coverage=1 00:02:28.962 --rc genhtml_function_coverage=1 00:02:28.962 --rc genhtml_legend=1 00:02:28.962 --rc geninfo_all_blocks=1 00:02:28.962 ' 00:02:28.962 00:17:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:28.962 --rc lcov_branch_coverage=1 00:02:28.962 --rc lcov_function_coverage=1 00:02:28.962 --rc genhtml_branch_coverage=1 00:02:28.962 --rc 
genhtml_function_coverage=1 00:02:28.962 --rc genhtml_legend=1 00:02:28.962 --rc geninfo_all_blocks=1 00:02:28.962 ' 00:02:28.962 00:17:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:28.962 --rc lcov_branch_coverage=1 00:02:28.962 --rc lcov_function_coverage=1 00:02:28.962 --rc genhtml_branch_coverage=1 00:02:28.963 --rc genhtml_function_coverage=1 00:02:28.963 --rc genhtml_legend=1 00:02:28.963 --rc geninfo_all_blocks=1 00:02:28.963 --no-external' 00:02:28.963 00:17:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:28.963 --rc lcov_branch_coverage=1 00:02:28.963 --rc lcov_function_coverage=1 00:02:28.963 --rc genhtml_branch_coverage=1 00:02:28.963 --rc genhtml_function_coverage=1 00:02:28.963 --rc genhtml_legend=1 00:02:28.963 --rc geninfo_all_blocks=1 00:02:28.963 --no-external' 00:02:28.963 00:17:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:28.963 lcov: LCOV version 1.14 00:02:28.963 00:17:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:43.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:43.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:43.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:43.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:43.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:43.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:43.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:43.837 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:01.928 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:01.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:01.929 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:01.929 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:01.929 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:01.929 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:01.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 
00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:01.930 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:02.512 00:18:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:02.512 00:18:28 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:02.512 00:18:28 -- common/autotest_common.sh@10 -- # set +x 00:03:02.512 00:18:28 -- spdk/autotest.sh@91 -- # rm -f 00:03:02.512 00:18:28 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.888 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:03.888 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:03.888 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:03.888 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:03.888 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:03.888 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:03.888 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:03.888 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:03.888 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:03.888 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:03.888 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:03.888 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:03.888 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:03.888 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:03.888 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:04.147 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:04.147 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:04.147 00:18:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:04.147 00:18:30 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:04.147 00:18:30 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:04.147 00:18:30 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:04.147 00:18:30 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:04.147 00:18:30 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:04.147 00:18:30 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:04.147 00:18:30 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.147 00:18:30 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:04.148 00:18:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:04.148 00:18:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:04.148 00:18:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:04.148 00:18:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:04.148 00:18:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:04.148 00:18:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:04.148 No valid GPT data, bailing 00:03:04.148 00:18:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:04.148 00:18:30 -- scripts/common.sh@391 -- # pt= 00:03:04.148 00:18:30 -- scripts/common.sh@392 -- # return 1 00:03:04.148 00:18:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero 
of=/dev/nvme0n1 bs=1M count=1 00:03:04.148 1+0 records in 00:03:04.148 1+0 records out 00:03:04.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00273934 s, 383 MB/s 00:03:04.148 00:18:30 -- spdk/autotest.sh@118 -- # sync 00:03:04.148 00:18:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:04.148 00:18:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:04.148 00:18:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:06.047 00:18:32 -- spdk/autotest.sh@124 -- # uname -s 00:03:06.047 00:18:32 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:06.047 00:18:32 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:06.047 00:18:32 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:06.047 00:18:32 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:06.047 00:18:32 -- common/autotest_common.sh@10 -- # set +x 00:03:06.047 ************************************ 00:03:06.047 START TEST setup.sh 00:03:06.047 ************************************ 00:03:06.047 00:18:32 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:06.047 * Looking for test storage... 00:03:06.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.047 00:18:32 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:06.047 00:18:32 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:06.047 00:18:32 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:06.047 00:18:32 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:06.047 00:18:32 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:06.047 00:18:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.047 ************************************ 00:03:06.047 START TEST acl 00:03:06.047 ************************************ 00:03:06.047 00:18:32 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:06.310 * Looking for test storage... 
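Just before the setup tests start, the pre-cleanup traced above probes /dev/nvme0n1 with spdk-gpt.py and blkid -s PTTYPE, sees "No valid GPT data, bailing", and zeroes the first 1 MiB with dd before syncing. The same decision, condensed into a standalone sketch; the conditional wrapper is mine, while the device path and the blkid/dd commands are the ones shown in the log (and the dd is destructive):

#!/usr/bin/env bash
# Sketch of the wipe step traced above: if the namespace carries no
# partition-table signature, zero its first MiB so tests start clean.
# Run only against a disposable device.
DEV=/dev/nvme0n1                       # namespace from the log

pt=$(blkid -s PTTYPE -o value "$DEV")  # empty when no GPT/MBR signature exists
if [[ -z "$pt" ]]; then
        dd if=/dev/zero of="$DEV" bs=1M count=1   # destructive: wipes the first MiB
        sync
fi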
00:03:06.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.310 00:18:32 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.310 00:18:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:06.310 00:18:32 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:06.310 00:18:32 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:06.310 00:18:32 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:06.310 00:18:32 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:06.310 00:18:32 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:06.310 00:18:32 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.310 00:18:32 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.214 00:18:33 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:08.214 00:18:33 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:08.214 00:18:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.214 00:18:33 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:08.214 00:18:33 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.214 00:18:33 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:09.150 Hugepages 00:03:09.150 node hugesize free / total 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:03:09.150 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.150 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:09.410 00:18:35 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:09.410 00:18:35 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:09.410 00:18:35 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:09.410 00:18:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.410 ************************************ 00:03:09.410 START TEST denied 00:03:09.410 ************************************ 00:03:09.410 00:18:35 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:03:09.410 00:18:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:09.410 00:18:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:09.410 00:18:35 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:09.410 00:18:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.410 00:18:35 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.789 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:10.789 00:18:36 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.789 00:18:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.326 00:03:13.326 real 0m4.110s 00:03:13.326 user 0m1.235s 00:03:13.326 sys 0m2.035s 00:03:13.326 00:18:39 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:13.326 00:18:39 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:13.326 ************************************ 00:03:13.326 END TEST denied 00:03:13.326 ************************************ 00:03:13.584 00:18:39 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:13.584 00:18:39 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:13.584 00:18:39 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:13.584 00:18:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:13.584 ************************************ 00:03:13.584 START TEST allowed 00:03:13.584 ************************************ 00:03:13.584 00:18:39 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:03:13.584 00:18:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:13.584 00:18:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:13.584 00:18:39 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:13.584 00:18:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.584 00:18:39 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.117 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.117 00:18:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:16.117 00:18:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:16.117 00:18:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:16.117 00:18:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.117 00:18:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.021 00:03:18.021 real 0m4.263s 00:03:18.021 user 0m1.186s 00:03:18.021 sys 0m1.963s 00:03:18.021 00:18:43 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:18.021 00:18:43 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:18.021 ************************************ 00:03:18.021 END TEST allowed 00:03:18.021 ************************************ 00:03:18.021 00:03:18.021 real 0m11.601s 00:03:18.021 user 0m3.751s 00:03:18.021 sys 0m5.993s 00:03:18.021 00:18:43 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:18.021 00:18:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.021 ************************************ 00:03:18.021 END TEST acl 00:03:18.021 ************************************ 00:03:18.021 00:18:43 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.021 00:18:43 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:18.021 00:18:43 setup.sh -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:03:18.021 00:18:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.021 ************************************ 00:03:18.021 START TEST hugepages 00:03:18.021 ************************************ 00:03:18.021 00:18:43 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.021 * Looking for test storage... 00:03:18.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35463792 kB' 'MemAvailable: 40150644 kB' 'Buffers: 2696 kB' 'Cached: 18464052 kB' 'SwapCached: 0 kB' 'Active: 14453444 kB' 'Inactive: 4470784 kB' 'Active(anon): 13864284 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460912 kB' 'Mapped: 187892 kB' 'Shmem: 13406804 kB' 'KReclaimable: 241060 kB' 'Slab: 633564 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392504 kB' 'KernelStack: 13024 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14993312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198876 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:18.021 00:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.021-00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@31-32 [xtrace elided: the IFS=': ' / read -r var val _ / continue pattern repeats for every /proc/meminfo key from MemFree through HugePages_Rsvd, none of which matches Hugepagesize]
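The per-key iterations in this stretch of the trace are the same three commands from setup/common.sh repeated over and over: set IFS to ': ', read one /proc/meminfo record into var and val, and continue unless var matches the requested key, in which case the value is echoed back (2048 for Hugepagesize here). A minimal sketch of that lookup, reconstructed from the xtrace rather than taken verbatim from the SPDK helper (the function name and the standalone form are assumptions; the real get_meminfo also handles per-NUMA-node meminfo files, which this sketch omits):

```bash
#!/usr/bin/env bash
# Minimal sketch of the per-key lookup the xtrace shows (reconstructed from the
# trace, not copied from the repo's test/setup/common.sh).
meminfo_lookup() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
        continue   # non-matching key: these are the 'continue' lines filling the trace
    done < /proc/meminfo
    return 1
}

meminfo_lookup Hugepagesize   # prints 2048 on this test node (value is in kB)
```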
00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.023 00:18:43 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.023 00:18:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.023 00:18:43 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:18.023 00:18:43 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:18.023 00:18:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.023 ************************************ 00:03:18.023 START TEST default_setup 00:03:18.023 ************************************ 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.023 00:18:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.399 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.399 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.399 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.399 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.399 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.399 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.399 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:03:19.399 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.399 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.368 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37573000 kB' 'MemAvailable: 42259852 kB' 'Buffers: 2696 kB' 'Cached: 18464156 kB' 'SwapCached: 0 kB' 'Active: 14473504 kB' 'Inactive: 4470784 kB' 'Active(anon): 13884344 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480704 kB' 'Mapped: 187768 kB' 'Shmem: 13406908 kB' 'KReclaimable: 241060 kB' 'Slab: 633240 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392180 kB' 'KernelStack: 13312 kB' 'PageTables: 9668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15016948 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 199180 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.368 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
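The meminfo snapshot printed just above is captured while get_meminfo answers the AnonHugePages query for verify_nr_hugepages, and it already contains the numbers the test cares about: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, which is exactly the pool default_setup requested earlier (2097152 kB / 2048 kB per page = 1024 pages on node 0). A quick consistency check over the same fields, as a hypothetical one-liner rather than anything the test scripts actually run:

```bash
# Hypothetical sanity check (not part of the SPDK scripts): HugePages_Total
# multiplied by Hugepagesize should equal the Hugetlb total reported by the kernel.
awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {h=$2}
     END { if (t*s == h) print "consistent: " t*s " kB"; else print "mismatch: " t*s " vs " h }' /proc/meminfo
# With the values in the trace: 1024 * 2048 kB = 2097152 kB, matching Hugetlb.
```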
00:03:20.368-00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [xtrace elided: the same IFS=': ' / read -r var val _ / continue pattern repeats for the keys Active(anon) through WritebackTmp, none of which matches AnonHugePages] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
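At this point the verification pass has its first number: AnonHugePages came back as 0, so transparent hugepages are not inflating the counters, and anon is set to 0. The same helper is then pointed at HugePages_Surp (this call) and, a step later, HugePages_Rsvd. A hedged sketch of that bookkeeping, with the closing comparison against the expected pool assumed rather than copied from setup/hugepages.sh:

```bash
#!/usr/bin/env bash
# Hedged sketch of the verification pass traced here. The three reads mirror the
# xtrace (anon, surp, rsvd); the final comparison against the expected pool size
# is an assumption, not the literal logic of setup/hugepages.sh.
meminfo() { awk -v k="$1" '$1 == k ":" {print $2}' /proc/meminfo; }

anon=$(meminfo AnonHugePages)    # 0 kB in this trace: no THP interference
surp=$(meminfo HugePages_Surp)   # 0 in this trace
rsvd=$(meminfo HugePages_Rsvd)   # queried next in the log
total=$(meminfo HugePages_Total)

expected=1024                    # what default_setup asked for on node 0
if (( total == expected )); then
    echo "hugepage pool as expected: ${total} pages (surp=${surp}, rsvd=${rsvd})"
else
    echo "unexpected hugepage pool: ${total} != ${expected}" >&2
    exit 1
fi
```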
00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.634 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37579040 kB' 'MemAvailable: 42265892 kB' 'Buffers: 2696 kB' 'Cached: 18464156 kB' 'SwapCached: 0 kB' 'Active: 14472936 kB' 'Inactive: 4470784 kB' 'Active(anon): 13883776 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480100 kB' 'Mapped: 187888 kB' 'Shmem: 13406908 kB' 'KReclaimable: 241060 kB' 'Slab: 633224 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392164 kB' 'KernelStack: 12848 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15015972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:20.635 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.635-00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [xtrace elided: the same IFS=': ' / read -r var val _ / continue pattern repeats for the keys Cached through CmaTotal, none of which matches HugePages_Surp] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 00:18:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37577688 kB' 'MemAvailable: 42264540 kB' 'Buffers: 2696 kB' 'Cached: 18464168 kB' 'SwapCached: 0 kB' 'Active: 14471420 kB' 'Inactive: 4470784 kB' 'Active(anon): 13882260 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478616 kB' 'Mapped: 187860 kB' 'Shmem: 13406920 kB' 'KReclaimable: 241060 kB' 'Slab: 633292 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392232 kB' 'KernelStack: 12928 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15015992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.638 nr_hugepages=1024 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.638 resv_hugepages=0 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.638 surplus_hugepages=0 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.638 anon_hugepages=0 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37577184 kB' 'MemAvailable: 42264036 kB' 'Buffers: 2696 kB' 'Cached: 18464168 kB' 'SwapCached: 0 kB' 'Active: 14471532 
kB' 'Inactive: 4470784 kB' 'Active(anon): 13882372 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478728 kB' 'Mapped: 187860 kB' 'Shmem: 13406920 kB' 'KReclaimable: 241060 kB' 'Slab: 633292 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392232 kB' 'KernelStack: 12912 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15016012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
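The trace above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one it was asked for. Below is a minimal sketch of that parsing loop, reconstructed only from the @-line markers visible in this log and not taken from the SPDK sources; names and structure follow the trace, details are simplified and should be treated as assumptions.

```bash
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup, as suggested by the setup/common.sh
# trace in this log: read the whole meminfo file, strip the "Node <N> " prefix
# that the per-node sysfs file adds, then scan for the requested field.
shopt -s extglob

get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem

        # With a node argument, prefer the per-node sysfs copy of meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 HugePages_Total: 1024"; drop the
        # prefix so both files parse the same way (assumption: extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue
                echo "$val"   # e.g. "1024" for HugePages_Total, "0" for HugePages_Surp
                return 0
        done
        return 1
}
```

Called as `get_meminfo HugePages_Surp` for the global counter, or `get_meminfo HugePages_Surp 0` for node 0, which matches the two call shapes visible later in this log.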
00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
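Around this point setup/hugepages.sh collects the surplus, reserved and total hugepage counters, checks that they add up to the 1024 pages the test requested, and then repeats the lookup per NUMA node. A hedged sketch of that accounting, assuming the get_meminfo sketch above is already defined in the same shell (variable names follow the trace; the node loop is simplified):

```bash
#!/usr/bin/env bash
# Sketch of the default_setup accounting seen in this trace; not the SPDK
# source. Assumes get_meminfo (above) and extglob are available.
shopt -s extglob

verify_default_setup() {
        local nr_hugepages=1024   # the target configured earlier in the test
        local surp resv total node
        local -a nodes_sys

        surp=$(get_meminfo HugePages_Surp)    # surplus_hugepages, 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)    # resv_hugepages, 0 in this run
        total=$(get_meminfo HugePages_Total)  # 1024 in this run

        # The pool is consistent when total == requested + surplus + reserved.
        (( total == nr_hugepages + surp + resv )) || return 1

        # Walk /sys/devices/system/node/node<N> and record each node's count;
        # the log shows 1024 pages on node 0 and 0 on node 1.
        for node in /sys/devices/system/node/node+([0-9]); do
                nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"
}
```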
00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:20.640 
00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20783672 kB' 'MemUsed: 12046212 kB' 'SwapCached: 0 kB' 'Active: 8543232 kB' 'Inactive: 188524 kB' 'Active(anon): 8147076 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 188524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8492896 kB' 'Mapped: 81724 kB' 'AnonPages: 241976 kB' 'Shmem: 7908216 kB' 'KernelStack: 6728 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118556 kB' 'Slab: 331456 kB' 'SReclaimable: 118556 kB' 'SUnreclaim: 212900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.640 00:18:46 
[xtrace elided: setup/common.sh@31-@32 -- IFS=': ' / read -r var val _ / continue repeated for every meminfo field (MemTotal … HugePages_Free) until HugePages_Surp is reached]
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:20.642 node0=1024 expecting 1024
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:20.642 
00:03:20.642 real    0m2.648s
00:03:20.642 user    0m0.691s
00:03:20.642 sys     0m0.971s
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:20.642 00:18:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:20.642 ************************************
00:03:20.642 END TEST default_setup
00:03:20.642 ************************************
00:03:20.642 00:18:46 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:20.642 00:18:46 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:20.642 00:18:46 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:20.642 00:18:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.642 ************************************
00:03:20.642 START TEST per_node_1G_alloc
00:03:20.642 ************************************
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.642 00:18:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:22.018 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:22.018 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:22.018 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:22.018 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:22.018 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:22.018 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:22.018 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:22.018 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:22.018 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:22.018 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:22.018 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:22.018 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:22.018 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:22.018 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:22.018 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:22.018 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:22.018 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
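The get_test_nr_hugepages 1048576 0 1 call traced above resolves a 1 GiB-per-node request into 512 default-size pages on each of nodes 0 and 1, which is where NRHUGE=512 and HUGENODE=0,1 come from. A minimal sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo snapshots below (variable names are borrowed from the trace; the real hugepages.sh does more bookkeeping):

  #!/usr/bin/env bash
  # Sketch only: how 1048576 kB per node becomes 512 hugepages per node.
  size_kb=1048576                      # requested size per node (1 GiB in kB)
  default_hugepage_kb=2048             # Hugepagesize seen in the snapshots
  node_ids=(0 1)                       # the two NUMA nodes passed to the helper

  nr_hugepages=$((size_kb / default_hugepage_kb))   # 512, matching NRHUGE=512
  nodes_test=()
  for node in "${node_ids[@]}"; do
      nodes_test[node]=$nr_hugepages                # reserve 512 pages on each node
  done

  echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]},${node_ids[1]}"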
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:22.018 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:22.018 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.018 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.018 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.018 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.019 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37574968 kB' 'MemAvailable: 42261820 kB' 'Buffers: 2696 kB' 'Cached: 18464268 kB' 'SwapCached: 0 kB' 'Active: 14477160 kB' 'Inactive: 4470784 kB' 'Active(anon): 13888000 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484096 kB' 'Mapped: 188296 kB' 'Shmem: 13407020 kB' 'KReclaimable: 241060 kB' 'Slab: 633504 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392444 kB' 'KernelStack: 12928 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15022184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199088 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 
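The xtrace that follows is setup/common.sh's get_meminfo walking the snapshot it just printed, field by field, until it reaches the requested key. A rough reconstruction of that helper, pieced together from the commands visible in the trace (mapfile, the Node-prefix strip, the IFS=': ' read loop); the actual setup/common.sh may differ in detail:

  #!/usr/bin/env bash
  shopt -s extglob

  # Reconstruction of the get_meminfo helper whose xtrace appears in this log.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo
      local -a mem

      # When a node id is passed, the per-node meminfo file is used instead (common.sh@23).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")        # strip the "Node N " prefix of per-node files

      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue    # skip fields until the requested one is found
          echo "${val:-0}"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp    # prints 0 on this builder, matching the trace

Each continue iteration recorded in the trace is this loop skipping a field that does not match the requested key.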
[xtrace elided: setup/common.sh@31-@32 -- IFS=': ' / read -r var val _ / continue repeated for every /proc/meminfo field (MemTotal … HardwareCorrupted) until AnonHugePages is reached]
00:03:22.020 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.020 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.020 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.020 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37581828 kB' 'MemAvailable: 42268680 kB' 'Buffers: 2696 kB' 'Cached: 18464268 kB' 'SwapCached: 0 kB' 'Active: 14478496 kB' 'Inactive: 4470784 kB' 'Active(anon): 13889336 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485484 kB' 'Mapped: 188820 kB' 'Shmem: 13407020 kB' 'KReclaimable: 241060 kB' 'Slab: 633552 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392492 kB' 'KernelStack: 12960 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15022204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199040 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.312 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.312 00:18:48 
[xtrace elided: setup/common.sh@31-@32 -- IFS=': ' / read -r var val _ / continue repeated for the remaining /proc/meminfo fields (Buffers … HugePages_Rsvd) until HugePages_Surp is reached]
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
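With anon=0 and surp=0 recorded, the trace moves on to HugePages_Rsvd; verify_nr_hugepages uses these readings to sanity-check the pool it just configured. The comparison below is only a sketch of that idea, not the exact expression in hugepages.sh, and the meminfo_val helper is a stand-in for get_meminfo:

  #!/usr/bin/env bash
  # Sketch of the kind of check the anon/surp/resv readings feed into.
  expected=1024

  meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

  anon=$(meminfo_val AnonHugePages)      # kB of transparent hugepages (0 in the trace)
  surp=$(meminfo_val HugePages_Surp)     # surplus pages beyond the persistent pool (0 here)
  resv=$(meminfo_val HugePages_Rsvd)     # pages reserved but not yet faulted in
  total=$(meminfo_val HugePages_Total)   # 1024 in the snapshots above

  # HugePages_Total includes surplus pages, so discount them before comparing
  # against the persistent pool size the test asked for.
  if (( total - surp == expected )); then
      echo "hugepage pool as expected: $expected (rsvd=$resv, thp=${anon} kB)"
  else
      echo "unexpected hugepage pool: total=$total surp=$surp expected=$expected" >&2
  fi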
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37587876 kB' 'MemAvailable: 42274728 kB' 'Buffers: 2696 kB' 'Cached: 18464272 kB' 'SwapCached: 0 kB' 'Active: 14471888 kB' 'Inactive: 4470784 kB' 'Active(anon): 13882728 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478888 kB' 'Mapped: 188368 kB' 'Shmem: 13407024 kB' 'KReclaimable: 241060 kB' 'Slab: 633544 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392484 kB' 'KernelStack: 12944 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15016104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199052 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB'
00:03:22.314 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' / read -r var val _ / [[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue repeated for every /proc/meminfo key from MemTotal through HugePages_Free -- none match]
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:22.316 nr_hugepages=1024
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:22.316 resv_hugepages=0
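
At this point the test has surp=0, resv=0 and nr_hugepages=1024, and the trace lines that follow assert that the kernel's HugePages_Total accounts for all three. A hedged sketch of that bookkeeping (variable names mirror the trace; the awk shortcut is mine, the real script walks the file with read/IFS as shown above):

    # read the three counters straight from /proc/meminfo
    nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

    # same consistency check the next trace lines perform for the 1024 requested pages
    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
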
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:22.316 surplus_hugepages=0
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.316 anon_hugepages=0
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37588272 kB' 'MemAvailable: 42275124 kB' 'Buffers: 2696 kB' 'Cached: 18464308 kB' 'SwapCached: 0 kB' 'Active: 14471964 kB' 'Inactive: 4470784 kB' 'Active(anon): 13882804 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478872 kB' 'Mapped: 187876 kB' 'Shmem: 13407060 kB' 'KReclaimable: 241060 kB' 'Slab: 633536 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392476 kB' 'KernelStack: 12912 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 15016128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199052 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB'
00:03:22.316 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' / read -r var val _ / [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue repeated for every /proc/meminfo key from MemTotal through Unaccepted -- none match]
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
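
get_nodes, traced above and continuing below, globs the NUMA nodes out of sysfs and records an equal share of the 1024 pages for each node (512/512 on the 2-node box in this run). A self-contained sketch of that enumeration, assuming only bash and sysfs (nullglob is added here for safety and is not in the traced script):

    # enumerate the NUMA nodes and give each an equal share of the 1024 pages
    shopt -s extglob nullglob            # extglob enables the +([0-9]) pattern used in the trace
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512    # node index becomes the array index
    done
    echo "no_nodes=${#nodes_sys[@]} pages_per_node=${nodes_sys[0]}"
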
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21832324 kB' 'MemUsed: 10997560 kB' 'SwapCached: 0 kB' 'Active: 8543176 kB' 'Inactive: 188524 kB' 'Active(anon): 8147020 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 188524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8492896 kB' 'Mapped: 81740 kB' 'AnonPages: 241952 kB' 'Shmem: 7908216 kB' 'KernelStack: 6760 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118556 kB' 'Slab: 331756 kB' 'SReclaimable: 118556 kB' 'SUnreclaim: 213200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
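
For the per-node pass, get_meminfo is called with a node argument, switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix before the same key matching runs, as the trace above shows. A hedged sketch of that variant (node_meminfo_get is an illustrative name; sed stands in for the mapfile/parameter-expansion trick the real script uses):

    node_meminfo_get() {
        local node=$1 get=$2 mem_f=/proc/meminfo var val _
        # per-node meminfo lives under sysfs; fall back to /proc/meminfo if the node file is absent
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # node files prefix every line with "Node <n> "; strip it so key matching works unchanged
        sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; break; }
        done
    }

    node_meminfo_get 0 HugePages_Surp   # prints "0" for node0 in this run
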
00:03:22.318 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' / read -r var val _ / [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue repeated for the node0 meminfo keys MemTotal through HugePages_Total -- none match]
00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.319 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15754944 kB' 'MemUsed: 11956900 kB' 'SwapCached: 0 kB' 'Active: 5928972 kB' 'Inactive: 4282260 kB' 'Active(anon): 5735968 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4282260 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9974160 kB' 'Mapped: 106136 kB' 'AnonPages: 237148 kB' 'Shmem: 5498896 kB' 'KernelStack: 6216 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122504 kB' 'Slab: 301780 kB' 'SReclaimable: 122504 kB' 'SUnreclaim: 179276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
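The xtrace above is the inner loop of the get_meminfo helper in setup/common.sh scanning node 1's meminfo for one key (HugePages_Surp) and echoing its value (0). A condensed sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, so the extglob handling and the for-loop shape are assumptions:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: fetch one field (e.g.
# HugePages_Surp) from /proc/meminfo or from a per-NUMA-node meminfo file.
# Reconstructed from the xtrace; the real setup/common.sh differs in detail.
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node files live under /sys and prefix every line with "Node N ",
    # which the trace strips before parsing.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    shopt -s extglob                        # needed for the +([0-9]) pattern
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the long run of "continue" above
        echo "${val:-0}"
        return 0
    done
    echo 0
}

# Example: get_meminfo HugePages_Surp 1  ->  0, which is the value the trace
# then feeds into the (( nodes_test[node] += 0 )) accounting at hugepages.sh@117.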
00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.320 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:22.321 node0=512 expecting 512 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:22.321 node1=512 expecting 512 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:22.321 00:03:22.321 real 0m1.613s 00:03:22.321 user 0m0.699s 00:03:22.321 sys 0m0.883s 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:22.321 00:18:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.321 ************************************ 00:03:22.321 END TEST per_node_1G_alloc 00:03:22.321 ************************************ 00:03:22.321 00:18:48 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:22.321 00:18:48 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:22.321 00:18:48 
setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:22.321 00:18:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.321 ************************************ 00:03:22.321 START TEST even_2G_alloc 00:03:22.321 ************************************ 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.321 00:18:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.695 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:23.695 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:23.695 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
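The even_2G_alloc prologue traced just above requests 2097152 kB of 2048 kB hugepages (nr_hugepages=1024) with HUGE_EVEN_ALLOC=yes and splits them evenly across the two NUMA nodes, 512 pages each. A simplified, hypothetical rendering of that split follows; the real get_test_nr_hugepages_per_node in hugepages.sh also handles user-specified nodes and other branches the trace skips:

#!/usr/bin/env bash
# Simplified sketch of the even per-node split traced for even_2G_alloc:
# 2097152 kB / 2048 kB per page = 1024 pages, i.e. 512 per node on 2 nodes.
# Hypothetical rendering of the hugepages.sh logic, not the actual script.
split_hugepages_evenly() {
    local size_kb=$1 hugepage_kb=$2 no_nodes=$3
    local nr_hugepages=$((size_kb / hugepage_kb))
    local per_node=$((nr_hugepages / no_nodes))
    local -A nodes_test=()

    local node
    for ((node = no_nodes - 1; node >= 0; node--)); do
        nodes_test[$node]=$per_node
    done

    declare -p nodes_test   # e.g. nodes_test=([1]="512" [0]="512")
}

split_hugepages_evenly 2097152 2048 2

The trace then applies the allocation through the 'setup output' step, roughly equivalent to invoking the spdk/scripts/setup.sh path shown above with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes set, before verify_nr_hugepages re-reads the counters in the entries that follow.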
00:03:23.695 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:23.695 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:23.695 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:23.695 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:23.695 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:23.695 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:23.695 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:23.695 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:23.695 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:23.695 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:23.695 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:23.695 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:23.695 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:23.695 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37609824 kB' 'MemAvailable: 42296676 kB' 'Buffers: 2696 kB' 'Cached: 18464408 kB' 'SwapCached: 0 kB' 'Active: 14466960 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877800 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473788 kB' 'Mapped: 187052 kB' 'Shmem: 13407160 kB' 'KReclaimable: 241060 kB' 'Slab: 633244 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392184 kB' 'KernelStack: 12960 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14992780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.695 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.696 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:23.697 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37609092 kB' 'MemAvailable: 42295944 kB' 'Buffers: 2696 kB' 'Cached: 18464408 kB' 'SwapCached: 0 kB' 'Active: 14467636 kB' 'Inactive: 4470784 kB' 'Active(anon): 13878476 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474552 kB' 'Mapped: 187128 kB' 'Shmem: 13407160 kB' 'KReclaimable: 241060 kB' 'Slab: 633180 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392120 kB' 'KernelStack: 13152 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14992800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.697 
00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.697 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.962 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 
00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 
00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37607156 kB' 'MemAvailable: 42294008 kB' 'Buffers: 2696 kB' 'Cached: 18464428 kB' 'SwapCached: 0 kB' 'Active: 14467536 kB' 'Inactive: 4470784 kB' 'Active(anon): 13878376 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474356 kB' 'Mapped: 187044 kB' 'Shmem: 13407180 kB' 'KReclaimable: 241060 kB' 'Slab: 633080 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392020 kB' 'KernelStack: 13232 kB' 'PageTables: 9456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14992820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199196 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.963 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.964 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.965 nr_hugepages=1024 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.965 resv_hugepages=0 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.965 surplus_hugepages=0 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.965 anon_hugepages=0 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 
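
The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key: each line is split on ': ' into a key and a value, every non-matching key falls through to "continue", and the value of the requested key is echoed back (0 for HugePages_Surp and HugePages_Rsvd here, 1024 for HugePages_Total below). The following is a minimal standalone sketch of that lookup, written only from what is visible in the trace; the function name, argument handling, and per-node prefix stripping are illustrative assumptions, not the exact SPDK source.

get_meminfo_sketch() {
    # get:  the meminfo key to look up (e.g. HugePages_Total, HugePages_Surp)
    # node: optional NUMA node; when given, read that node's meminfo instead
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # Per-node meminfo prefixes every line with "Node <n> "; strip it so
        # the keys match the plain /proc/meminfo names.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

Usage matching the values echoed in this trace:
  get_meminfo_sketch HugePages_Total    # -> 1024
  get_meminfo_sketch HugePages_Rsvd     # -> 0
  get_meminfo_sketch HugePages_Surp 0   # -> node0's surplus hugepages

With surp=0 and resv=0, the even_2G_alloc test then confirms 1024 == nr_hugepages + surp + resv and splits the pages evenly, 512 per NUMA node across no_nodes=2, which the per-node HugePages_Surp lookups that follow verify.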
00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37605464 kB' 'MemAvailable: 42292316 kB' 'Buffers: 2696 kB' 'Cached: 18464432 kB' 'SwapCached: 0 kB' 'Active: 14467616 kB' 'Inactive: 4470784 kB' 'Active(anon): 13878456 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474432 kB' 'Mapped: 187044 kB' 'Shmem: 13407184 kB' 'KReclaimable: 241060 kB' 'Slab: 633072 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392012 kB' 'KernelStack: 13200 kB' 'PageTables: 9800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14990484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199052 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.965 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.966 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
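Below, the same scan is repeated against the per-node view: when a node number is passed, the helper switches from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo and strips the leading "Node <N> " prefix from each line before parsing, and the even_2G_alloc check then expects the 1024 reserved pages to land as 512 per node on this 2-node machine. A rough sketch of that per-node path, again with illustrative names rather than the exact setup/common.sh code:

```bash
#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern used to drop the "Node N " prefix

# Illustrative per-node variant of the lookup traced below: pick the node's own
# meminfo file when it exists, drop the "Node <N> " prefix, then scan for the key.
get_node_meminfo() {
    local get=$1 node=$2 line var val _
    local file=/sys/devices/system/node/node${node}/meminfo
    [[ -e $file ]] || file=/proc/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# even_2G_alloc expectation: 1024 huge pages spread evenly over 2 nodes -> 512 each
for node in 0 1; do
    echo "node${node}=$(get_node_meminfo HugePages_Total "$node") expecting 512"
done
```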
00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21838580 kB' 'MemUsed: 10991304 kB' 'SwapCached: 0 kB' 'Active: 8541916 kB' 'Inactive: 188524 kB' 'Active(anon): 8145760 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 188524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8492908 kB' 'Mapped: 80980 kB' 'AnonPages: 240660 kB' 'Shmem: 7908228 kB' 'KernelStack: 6744 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 
0 kB' 'KReclaimable: 118556 kB' 'Slab: 331480 kB' 'SReclaimable: 118556 kB' 'SUnreclaim: 212924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.967 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.968 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15767252 kB' 'MemUsed: 11944592 kB' 'SwapCached: 0 kB' 'Active: 5924456 kB' 'Inactive: 4282260 kB' 'Active(anon): 5731452 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4282260 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9974260 kB' 'Mapped: 106048 kB' 'AnonPages: 232480 kB' 'Shmem: 5498996 kB' 'KernelStack: 6008 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 122504 kB' 'Slab: 301592 kB' 'SReclaimable: 122504 kB' 'SUnreclaim: 179088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.969 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:23.970 node0=512 expecting 512 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:23.970 node1=512 expecting 512 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:23.970 00:03:23.970 real 0m1.622s 00:03:23.970 user 0m0.705s 00:03:23.970 sys 0m0.884s 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:23.970 00:18:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.970 ************************************ 00:03:23.970 END TEST even_2G_alloc 00:03:23.970 ************************************ 00:03:23.970 00:18:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:23.970 00:18:49 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:23.970 00:18:49 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:23.970 00:18:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.970 ************************************ 00:03:23.970 START TEST odd_alloc 00:03:23.970 ************************************ 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@1122 -- # odd_alloc 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.970 00:18:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.349 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.349 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.349 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.349 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.349 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.349 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.349 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.349 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.349 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.349 0000:80:04.7 (8086 0e27): 
Already using the vfio-pci driver 00:03:25.349 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.349 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.349 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.349 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.349 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.349 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.349 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.349 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:25.349 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37576744 kB' 'MemAvailable: 42263596 kB' 'Buffers: 2696 kB' 'Cached: 18464540 kB' 'SwapCached: 0 kB' 'Active: 14465472 kB' 'Inactive: 4470784 kB' 'Active(anon): 13876312 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472180 kB' 'Mapped: 187076 kB' 'Shmem: 13407292 kB' 'KReclaimable: 241060 kB' 'Slab: 632932 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391872 kB' 'KernelStack: 12848 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14990552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.350 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37581032 kB' 'MemAvailable: 42267884 kB' 'Buffers: 2696 kB' 'Cached: 18464544 kB' 'SwapCached: 0 kB' 'Active: 14465820 kB' 'Inactive: 4470784 kB' 'Active(anon): 13876660 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472548 kB' 'Mapped: 187076 kB' 'Shmem: 13407296 kB' 'KReclaimable: 241060 kB' 'Slab: 632932 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391872 kB' 'KernelStack: 12880 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14990568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.351 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:25.352 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37581032 kB' 'MemAvailable: 42267884 kB' 'Buffers: 2696 kB' 'Cached: 18464548 kB' 'SwapCached: 0 kB' 'Active: 14466976 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877816 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473740 kB' 'Mapped: 187512 kB' 'Shmem: 13407300 kB' 'KReclaimable: 241060 kB' 'Slab: 632908 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391848 kB' 'KernelStack: 12848 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14992080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.353 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 
00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
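
Note for readers following the trace: the loop being stepped through here is setup/common.sh's get_meminfo, which walks /proc/meminfo one "Field: value" line at a time until it reaches the requested key and then echoes that value (0 when the field is absent). The following is a minimal, illustrative Bash sketch of that lookup, not the verbatim SPDK helper; the function name meminfo_value and the comments are ours, and the per-node /sys/devices/system/node/nodeN/meminfo handling done by the real script is omitted.

    #!/usr/bin/env bash
    # Hedged sketch of the lookup the xtrace above is exercising:
    # split each /proc/meminfo line on ': ' and print the value of
    # the requested field, defaulting to 0 if it never appears.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"   # kB for sizes, a bare count for HugePages_*
                return 0
            fi
        done </proc/meminfo
        echo 0
    }

    # Example lookups matching this run's fields:
    #   meminfo_value AnonHugePages    -> 0
    #   meminfo_value HugePages_Surp   -> 0
    #   meminfo_value HugePages_Rsvd   -> 0
    #   meminfo_value HugePages_Total  -> 1025

In this run the hugepages.odd_alloc test reads AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total this way, obtaining anon=0, surp=0, resv=0 and a total of 1025, which is what lets the later check (( 1025 == nr_hugepages + surp + resv )) pass.
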
00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.616 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 
00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:25.617 nr_hugepages=1025 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.617 resv_hugepages=0 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.617 surplus_hugepages=0 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.617 anon_hugepages=0 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.617 00:18:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37580956 kB' 'MemAvailable: 42267808 kB' 'Buffers: 2696 kB' 'Cached: 18464548 kB' 'SwapCached: 0 kB' 'Active: 14469892 kB' 'Inactive: 4470784 kB' 'Active(anon): 13880732 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476640 kB' 'Mapped: 187488 kB' 'Shmem: 13407300 kB' 'KReclaimable: 241060 kB' 'Slab: 632976 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391916 kB' 'KernelStack: 12880 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14995400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 
00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.617 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21809212 kB' 'MemUsed: 11020672 kB' 'SwapCached: 0 kB' 'Active: 8546712 kB' 'Inactive: 188524 kB' 'Active(anon): 8150556 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 188524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8492972 kB' 'Mapped: 81152 kB' 'AnonPages: 245396 kB' 'Shmem: 7908292 kB' 'KernelStack: 6792 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118556 kB' 'Slab: 331356 kB' 'SReclaimable: 118556 kB' 'SUnreclaim: 212800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
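The xtrace above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo one key at a time: mapfile pulls the file into an array, the "Node <N> " prefix is stripped, and each line is split with IFS=': ' until the requested field is found. A minimal standalone sketch of that pattern, using the same helper name but not the verbatim SPDK source:

shopt -s extglob
get_meminfo() {
    # get_meminfo <field> [<node>]: print <field> from /proc/meminfo, or from
    # the node's own meminfo file when a node index is given.
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <N> "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total      # 1025 on this host, matching the echo earlier in the trace
get_meminfo HugePages_Surp 0     # 0 for node0, as echoed a few entries below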
00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.620 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15769172 kB' 'MemUsed: 11942672 kB' 'SwapCached: 0 kB' 'Active: 5924152 kB' 'Inactive: 4282260 kB' 'Active(anon): 5731148 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4282260 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9974320 kB' 'Mapped: 106816 kB' 'AnonPages: 232188 kB' 'Shmem: 5499056 kB' 'KernelStack: 6072 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122504 kB' 'Slab: 301620 kB' 'SReclaimable: 122504 kB' 'SUnreclaim: 179116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
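Taken together, the odd_alloc check traced here does three things: confirm the box-wide HugePages_Total equals nr_hugepages plus surplus and reserved pages, fold each node's reserved and surplus pages into the test's per-node tally (setup/hugepages.sh@115-@117), and finally compare the per-node totals as index-sorted sets (@126-@130), so the odd 513th page may sit on either node and the test still passes. A condensed sketch of that logic, reusing the get_meminfo helper sketched above; the initial contents of nodes_test are an assumption, since they were seeded before this part of the trace:

declare -a nodes_sys=([0]=512 [1]=513)    # as read back from each node's sysfs nr_hugepages in this run
declare -a nodes_test=([0]=513 [1]=512)   # assumption: the per-node split the test asked for
declare -a sorted_t=() sorted_s=()
resv=0

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # 0 on both nodes in this run
done

for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # indexed-array keys come back in ascending order,
    sorted_s[nodes_sys[node]]=1    # so the keys double as a sorted set of totals
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # "512 513" == "512 513" -> odd_alloc passes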
00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.621 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:25.622 node0=512 expecting 513 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:25.622 node1=513 expecting 512 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:25.622 00:03:25.622 real 0m1.566s 00:03:25.622 user 0m0.686s 00:03:25.622 sys 0m0.846s 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:25.622 00:18:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.622 ************************************ 00:03:25.622 END TEST odd_alloc 00:03:25.622 ************************************ 00:03:25.622 00:18:51 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:25.622 00:18:51 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:25.622 00:18:51 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:25.622 00:18:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.622 ************************************ 00:03:25.622 START TEST custom_alloc 00:03:25.622 ************************************ 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.622 00:18:51 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:25.622 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:25.623 00:18:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:25.623 00:18:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.623 00:18:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.003 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.003 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.003 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.003 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.003 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.003 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.003 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.003 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.003 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.003 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.003 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.003 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.003 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.003 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.003 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.003 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 
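At this point custom_alloc has settled on nodes_hp[0]=512 and nodes_hp[1]=1024 and handed that split to scripts/setup.sh via HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'; the "Already using the vfio-pci driver" lines above are setup.sh walking the PCI devices it manages. setup.sh itself is not shown in this excerpt, but a per-node reservation of this shape ultimately lands in the kernel's per-node sysfs knob; a rough sketch of that step under that assumption:

# What HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' amounts to at the kernel
# interface: write each node's page count into that node's nr_hugepages file.
# Generic sysfs mechanism, not the literal body of scripts/setup.sh; needs root.
reserve_node_hugepages() {
    local node=$1 pages=$2 size_kb=${3:-2048}
    echo "$pages" > "/sys/devices/system/node/node${node}/hugepages/hugepages-${size_kb}kB/nr_hugepages"
}

reserve_node_hugepages 0 512     # 512 x 2 MiB pages on node0
reserve_node_hugepages 1 1024    # 1024 x 2 MiB pages on node1, 1536 pages in total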
00:03:27.003 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.003 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36506200 kB' 'MemAvailable: 41193052 kB' 'Buffers: 2696 kB' 'Cached: 18464676 kB' 'SwapCached: 0 kB' 'Active: 14465944 kB' 'Inactive: 4470784 kB' 'Active(anon): 13876784 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472596 kB' 'Mapped: 187160 kB' 'Shmem: 13407428 kB' 'KReclaimable: 241060 kB' 'Slab: 632996 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391936 kB' 'KernelStack: 12880 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14990820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 
'DirectMap1G: 47185920 kB' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.004 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
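Note: the long runs of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" above are xtrace output of get_meminfo scanning /proc/meminfo one "Key: value" pair at a time until it reaches the requested field (AnonHugePages, which just came back 0, setting anon=0). A self-contained sketch of that parsing pattern follows; it is simplified relative to setup/common.sh, which can also read the per-node meminfo files under /sys/devices/system/node.

#!/usr/bin/env bash
# Hedged sketch of the get_meminfo pattern traced above: scan
# /proc/meminfo line by line and print the value of one field.
get_meminfo_sketch() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # same skip-until-match loop as the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# The AnonHugePages lookup that just returned 0 in the log:
get_meminfo_sketch AnonHugePages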
00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36506352 kB' 'MemAvailable: 41193204 kB' 'Buffers: 2696 kB' 'Cached: 18464676 kB' 'SwapCached: 0 kB' 'Active: 14466764 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877604 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473388 kB' 'Mapped: 187160 kB' 'Shmem: 13407428 kB' 'KReclaimable: 241060 kB' 'Slab: 632972 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391912 kB' 'KernelStack: 12896 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14990836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.005 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 
00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.006 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
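Note: the HugePages_Surp and HugePages_Rsvd lookups in this stretch feed verify_nr_hugepages, which compares the live counters against the 1536 pages requested as nodes_hp[0]=512 + nodes_hp[1]=1024. A small hedged sketch of that kind of bookkeeping is shown below; the exact check used by hugepages.sh may differ, and the total-minus-surplus comparison here is an assumption for illustration.

#!/usr/bin/env bash
# Hedged sketch of the check verify_nr_hugepages is building up to with
# the HugePages_Total/Free/Rsvd/Surp values gathered above.
set -euo pipefail

requested=1536                       # nodes_hp[0]=512 + nodes_hp[1]=1024

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

echo "total=$total surp=$surp rsvd=$rsvd requested=$requested"

# Assumption: the requested count should match the system-wide total
# once surplus pages are excluded.
if (( total - surp == requested )); then
    echo "hugepage count matches the per-node request"
else
    echo "mismatch: expected $requested, have $((total - surp))" >&2
    exit 1
fi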
00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36506352 kB' 'MemAvailable: 41193204 kB' 'Buffers: 2696 kB' 'Cached: 18464684 kB' 'SwapCached: 0 kB' 'Active: 14466080 kB' 'Inactive: 4470784 kB' 'Active(anon): 13876920 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472712 kB' 'Mapped: 187140 kB' 'Shmem: 13407436 kB' 'KReclaimable: 241060 kB' 'Slab: 632968 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391908 kB' 'KernelStack: 12880 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14990860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 
kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.007 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.270 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 
00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 
00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.271 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@33 -- # return 0 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:27.272 nr_hugepages=1536 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.272 resv_hugepages=0 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.272 surplus_hugepages=0 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.272 anon_hugepages=0 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36506096 kB' 'MemAvailable: 41192948 kB' 'Buffers: 2696 kB' 'Cached: 18464704 kB' 'SwapCached: 0 kB' 'Active: 14465544 kB' 'Inactive: 4470784 kB' 'Active(anon): 13876384 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472124 kB' 'Mapped: 187060 kB' 'Shmem: 13407456 kB' 'KReclaimable: 241060 kB' 'Slab: 632956 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391896 kB' 'KernelStack: 12880 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14993240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:27.272 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.272 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:27.273 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.274 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21793280 kB' 'MemUsed: 11036604 kB' 'SwapCached: 0 kB' 'Active: 8542488 kB' 'Inactive: 188524 kB' 'Active(anon): 8146332 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 188524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8493064 kB' 'Mapped: 81012 kB' 'AnonPages: 241060 kB' 'Shmem: 7908384 kB' 'KernelStack: 7176 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118556 kB' 'Slab: 331416 kB' 'SReclaimable: 118556 kB' 'SUnreclaim: 212860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.274 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14712244 kB' 'MemUsed: 12999600 kB' 'SwapCached: 0 kB' 'Active: 5924916 kB' 'Inactive: 4282260 kB' 'Active(anon): 5731912 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4282260 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9974336 kB' 'Mapped: 106048 kB' 'AnonPages: 232876 kB' 'Shmem: 5499072 kB' 'KernelStack: 6248 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122504 kB' 'Slab: 301528 kB' 'SReclaimable: 122504 kB' 'SUnreclaim: 179024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.275 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.276 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.277 node0=512 expecting 512 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:27.277 node1=1024 expecting 1024 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:27.277 00:03:27.277 real 0m1.615s 00:03:27.277 user 0m0.710s 00:03:27.277 sys 0m0.871s 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:27.277 00:18:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.277 ************************************ 00:03:27.277 END TEST custom_alloc 00:03:27.277 ************************************ 00:03:27.277 00:18:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:27.277 00:18:53 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:27.277 00:18:53 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:27.277 00:18:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.277 ************************************ 00:03:27.277 START TEST no_shrink_alloc 00:03:27.277 ************************************ 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:27.277 00:18:53 
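The records just above close out the custom_alloc case (its per-node counts matched the expected '512,1024') and open no_shrink_alloc, which asks get_test_nr_hugepages for size 2097152 pinned to node 0; the hugepages.sh@49-@73 trace around this point shows nr_hugepages becoming 1024 and the whole amount landing on node 0. A minimal sketch of the per-node split being traced, using only names visible in the trace (the even-split fallback at the end is an assumption, it is not exercised in this run):

    # condensed from the hugepages.sh@62-@73 records above
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")            # ('0') in this run
        local _nr_hugepages=$nr_hugepages  # 1024, set by get_test_nr_hugepages
        local _no_nodes=2                  # NUMA nodes present on the host
        local -g nodes_test=()
        if (( ${#user_nodes[@]} > 0 )); then
            for _no_nodes in "${user_nodes[@]}"; do
                nodes_test[_no_nodes]=$_nr_hugepages   # node0 -> all 1024 pages
            done
            return 0
        fi
        # assumption: with no explicit node list the pages would instead be
        # spread across all _no_nodes nodes (that branch is not shown here)
    }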
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.277 00:18:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.651 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.651 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.651 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.651 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.651 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.651 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.651 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.651 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.651 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.651 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:28.651 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:28.651 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:28.651 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:28.651 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:28.651 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:28.651 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:28.651 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37541776 kB' 'MemAvailable: 42228628 kB' 'Buffers: 2696 kB' 'Cached: 18464948 kB' 'SwapCached: 0 kB' 'Active: 14466416 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877256 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472816 kB' 'Mapped: 187160 kB' 'Shmem: 13407700 kB' 'KReclaimable: 241060 kB' 'Slab: 633188 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392128 kB' 'KernelStack: 12928 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.651 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 
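Nearly all of the surrounding output is the xtrace of setup/common.sh's get_meminfo scanning /proc/meminfo one key at a time until it reaches the field verify_nr_hugepages asked for (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down); every 'continue' record is one non-matching key. A condensed sketch of that loop, reconstructed from the @17-@33 records; the real helper in common.sh may arrange the per-node branch slightly differently:

    # reconstructed from the setup/common.sh@17-@33 records in this trace
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node lookup; node is empty in this run
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix found in per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each rejected key is one "continue" record above
            echo "$val"                        # e.g. "echo 0" for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }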
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.652 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37542648 kB' 'MemAvailable: 42229500 kB' 'Buffers: 2696 kB' 'Cached: 18464948 kB' 'SwapCached: 0 kB' 'Active: 14466728 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877568 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473132 kB' 'Mapped: 187160 kB' 'Shmem: 13407700 kB' 'KReclaimable: 241060 kB' 'Slab: 633188 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392128 kB' 'KernelStack: 12896 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.653 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.915 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 
00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.916 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.916 00:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37554352 kB' 'MemAvailable: 42241204 kB' 'Buffers: 2696 kB' 'Cached: 18464952 kB' 'SwapCached: 0 kB' 'Active: 14466464 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877304 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472852 kB' 'Mapped: 187076 kB' 'Shmem: 13407704 kB' 'KReclaimable: 241060 kB' 'Slab: 633076 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392016 kB' 'KernelStack: 12944 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 
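At this point the HugePages_Surp scan has returned 0 and verify_nr_hugepages (hugepages.sh@96-@100) moves on to HugePages_Rsvd, having already read AnonHugePages because transparent hugepages are not set to 'never'. A short sketch of the three lookups being traced, with the surrounding function body assumed rather than shown:

    # the hugepages.sh@96-@100 records in this capture; the per-node comparison
    # that consumes these values comes later in the full log
    [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]] && \
        anon=$(get_meminfo AnonHugePages)   # "always [madvise] never" here, so anon is read: 0
    surp=$(get_meminfo HugePages_Surp)      # surplus pages: 0
    resv=$(get_meminfo HugePages_Rsvd)      # reserved pages: lookup still running where this capture ends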
00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 
00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.917 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
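The xtrace above is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the one requested (here HugePages_Rsvd), echoing its value and returning. A minimal bash sketch of that pattern is below; it is not the verbatim SPDK helper (which, as the trace shows, snapshots the file with mapfile/printf and strips the "Node N " prefix with an extglob), just the same idea in plainer form.

get_meminfo() {
    local get=$1 node=$2 line key val
    local mem_f=/proc/meminfo
    # a per-node query reads the node's own meminfo when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}      # per-node files prefix every key with "Node <n> "
        key=${line%%:*}
        val=${line#*:}
        if [[ $key == "$get" ]]; then
            val=${val//[!0-9]/}         # keep just the number, dropping spaces and "kB"
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

With a helper like this, the values recorded in this run would come from calls such as resv=$(get_meminfo HugePages_Rsvd) for the whole system, or get_meminfo HugePages_Surp 0 for NUMA node 0.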
00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.918 nr_hugepages=1024 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.918 resv_hugepages=0 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.918 surplus_hugepages=0 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.918 anon_hugepages=0 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.918 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37554796 kB' 'MemAvailable: 42241648 kB' 'Buffers: 2696 kB' 'Cached: 18464968 kB' 'SwapCached: 0 kB' 'Active: 14465412 kB' 'Inactive: 4470784 kB' 'Active(anon): 13876252 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471756 kB' 'Mapped: 187076 kB' 'Shmem: 13407720 kB' 'KReclaimable: 241060 kB' 'Slab: 633076 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392016 kB' 'KernelStack: 12864 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198908 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 
00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.919 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
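This scan feeds a simple consistency check on the hugepage pool: the script has already recorded surp=0 (hugepages.sh@99) and resv=0 (@100), echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and once HugePages_Total is read it asserts that the kernel's total matches (@107 and @109). A rough, self-contained condensation of that arithmetic, using awk in place of the get_meminfo helper and with the values observed in this run noted in comments:

#!/usr/bin/env bash
# condensed sketch of the assertions in setup/hugepages.sh @99-@110 as seen in this log
nr_hugepages=1024
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0 in this run
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 in this run
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run

# the pool is consistent when the kernel's total equals the requested pages
# plus any surplus and reserved pages: 1024 == 1024 + 0 + 0
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
    && echo "nr_hugepages=$nr_hugepages verified"

With both assertions holding, the test then splits the count across NUMA nodes (get_nodes, no_nodes=2) and repeats the same per-node check against /sys/devices/system/node/node0/meminfo, which is the "node0=1024 expecting 1024" result printed further down.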
00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=0 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.921 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20749796 kB' 'MemUsed: 12080088 kB' 'SwapCached: 0 kB' 'Active: 8540496 kB' 'Inactive: 188524 kB' 'Active(anon): 8144340 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 188524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8493152 kB' 'Mapped: 81024 kB' 'AnonPages: 238952 kB' 'Shmem: 7908472 kB' 'KernelStack: 6856 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118556 kB' 'Slab: 331452 kB' 'SReclaimable: 118556 kB' 'SUnreclaim: 212896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.922 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.923 node0=1024 expecting 1024 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.923 00:18:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.301 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.301 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.301 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.301 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.301 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.301 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.301 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.301 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.301 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.301 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:30.301 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:30.301 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:30.301 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:30.301 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:30.301 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:30.301 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:30.301 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:30.301 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local 
surp 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37535620 kB' 'MemAvailable: 42222472 kB' 'Buffers: 2696 kB' 'Cached: 18465068 kB' 'SwapCached: 0 kB' 'Active: 14466688 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877528 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472964 kB' 'Mapped: 187088 kB' 'Shmem: 13407820 kB' 'KReclaimable: 241060 kB' 'Slab: 633056 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 391996 kB' 'KernelStack: 12896 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:30.301 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
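The scan running through this stretch of the trace is setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a node's own meminfo file when a node argument is given), splits each line on ': ' into a field name and a value, and skips every field with continue until the requested one is found, at which point the value is echoed and the function returns 0. A minimal sketch of that loop, reconstructed from the xtrace output here rather than copied from the real script (the argument handling and the extglob prefix strip are assumptions based on the commands visible in the trace):

    # Sketch reconstructed from the set -x trace; the real setup/common.sh may differ.
    shopt -s extglob                              # needed for the +([0-9]) strip below
    get_meminfo() {
        local get=$1 node=${2:-}                  # field name, optional NUMA node
        local var val _ mem
        local mem_f=/proc/meminfo
        # Per-node lookups read the node's own meminfo file when it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue      # every non-matching field is skipped, as above
            echo "$val"                           # e.g. 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }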
00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.302 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37539164 kB' 'MemAvailable: 42226016 kB' 'Buffers: 2696 kB' 'Cached: 18465068 kB' 'SwapCached: 0 kB' 'Active: 14465856 kB' 'Inactive: 4470784 kB' 'Active(anon): 13876696 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472128 kB' 'Mapped: 187080 kB' 'Shmem: 13407820 kB' 'KReclaimable: 241060 kB' 'Slab: 633124 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392064 kB' 'KernelStack: 12944 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
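At this point the trace has moved on to the HugePages_Surp lookup inside hugepages.sh's verify_nr_hugepages: transparent hugepages were confirmed not to be forced on, AnonHugePages came back as 0, and the surplus and reserved counts are read the same way before each node's count is echoed as 'node0=1024 expecting 1024' and compared. A rough, simplified outline of that flow (the control flow is compressed and nodes_test/nodes_sys are assumed to already hold the expected and observed per-node counts; only get_meminfo and the echoed format are taken directly from the trace):

    # Simplified outline of the verification pass; not the literal hugepages.sh code.
    verify_nr_hugepages() {
        local node surp resv anon
        local -A sorted_t=() sorted_s=()
        # If THP is not pinned to [never], anonymous hugepages must still be zero
        # before the per-node counts are checked.
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)
            (( anon == 0 ))
        fi
        surp=$(get_meminfo HugePages_Surp)        # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += surp ))        # fold surplus pages into the expectation
            sorted_t[${nodes_test[node]}]=1
            sorted_s[${nodes_sys[node]}]=1
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]
        done
    }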
00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.303 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.304 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37539488 kB' 'MemAvailable: 42226340 kB' 'Buffers: 2696 kB' 'Cached: 
18465088 kB' 'SwapCached: 0 kB' 'Active: 14466184 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877024 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472476 kB' 'Mapped: 187080 kB' 'Shmem: 13407840 kB' 'KReclaimable: 241060 kB' 'Slab: 633116 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392056 kB' 'KernelStack: 12960 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.305 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
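The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries in this stretch is setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time until it reaches HugePages_Rsvd. A minimal sketch of that scan, written as a simplified stand-in rather than the exact SPDK helper (the function name and the sed-based prefix strip are illustrative):

    # Simplified stand-in for the get_meminfo scan traced above: split each
    # meminfo line on ': ', skip keys that do not match, print the value of
    # the first match. Per-node files under sysfs prefix every line with
    # "Node N ", which is stripped first so the key lands in $var.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        sed -E 's/^Node [0-9]+ //' "$mem_f" | while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # MemTotal, MemFree, ... are skipped
            echo "${val:-0}"
            break
        done
    }
    # Usage matching this run: get_meminfo_sketch HugePages_Rsvd   -> 0
    #                          get_meminfo_sketch HugePages_Surp 0 -> 0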
00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.306 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
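The backslash-escaped right-hand side in these comparisons is only an artifact of bash's xtrace: when the pattern side of [[ ... == ... ]] is a quoted expansion in the script, set -x reprints it with every character escaped so the traced line still reads as a literal string match rather than a glob. A tiny, self-contained reproduction (variable names here are illustrative):

    set -x
    get=HugePages_Rsvd
    var=MemTotal
    [[ $var == "$get" ]] || true   # traced roughly as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x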
00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.307 nr_hugepages=1024 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.307 resv_hugepages=0 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.307 surplus_hugepages=0 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.307 anon_hugepages=0 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541728 kB' 'MemFree: 37539168 kB' 'MemAvailable: 42226020 kB' 'Buffers: 2696 kB' 'Cached: 18465108 kB' 'SwapCached: 0 kB' 'Active: 14466508 kB' 'Inactive: 4470784 kB' 'Active(anon): 13877348 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472796 kB' 'Mapped: 187080 kB' 'Shmem: 13407860 kB' 'KReclaimable: 241060 kB' 'Slab: 633116 kB' 'SReclaimable: 241060 kB' 'SUnreclaim: 392056 kB' 'KernelStack: 12976 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14991664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2715228 kB' 'DirectMap2M: 19224576 kB' 'DirectMap1G: 47185920 kB' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.307 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
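The nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 echoes a little earlier are hugepages.sh cross-checking the pool it configured: the kernel-reported HugePages_Total (the readback still in progress here) has to equal the requested pages plus surplus plus reserved. A hedged sketch of that bookkeeping with this run's values; the awk one-liners stand in for the script's own get_meminfo calls:

    # Accounting mirrored from the hugepages.sh assertions in the trace.
    nr_hugepages=1024                                                   # requested pool size
    resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)     # 0 in this run
    surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)     # 0 in this run
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)    # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool accounting is off: $total"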
00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.308 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
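When the system-wide readback above finishes (the echo 1024 / return 0 that follows), the same helper is re-run per NUMA node, as the node=0 and mem_f=/sys/devices/system/node/node0/meminfo lines below show. Per-node meminfo lines carry a "Node N " prefix, which the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips before the ': ' split. A short sketch of just that step, assuming node0 exists on the machine:

    # Per-node meminfo lines look like "Node 0 MemTotal: ... kB"; load the
    # file into an array and strip the prefix with the same extglob pattern
    # the trace uses, so the key is back in the first field.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | head -n 3   # MemTotal:, MemFree:, MemUsed: for node0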
00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20741884 kB' 'MemUsed: 12088000 kB' 'SwapCached: 0 kB' 'Active: 8540972 kB' 'Inactive: 188524 kB' 'Active(anon): 8144816 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 188524 kB' 'Unevictable: 3072 kB' 'Mlocked: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8493160 kB' 'Mapped: 81032 kB' 'AnonPages: 239500 kB' 'Shmem: 7908480 kB' 'KernelStack: 6856 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118556 kB' 'Slab: 331496 kB' 'SReclaimable: 118556 kB' 'SUnreclaim: 212940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.309 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.309 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 
00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.310 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.311 node0=1024 expecting 1024 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.311 00:03:30.311 real 0m3.133s 00:03:30.311 user 0m1.323s 00:03:30.311 sys 0m1.746s 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:30.311 00:18:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.311 ************************************ 00:03:30.311 END TEST no_shrink_alloc 00:03:30.311 ************************************ 00:03:30.311 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:30.311 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:30.311 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.311 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.311 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.311 00:18:56 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.311 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.569 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.569 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.569 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.569 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.569 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.569 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.569 00:18:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.569 00:03:30.569 real 0m12.610s 00:03:30.569 user 0m4.984s 00:03:30.569 sys 0m6.455s 00:03:30.569 00:18:56 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:30.569 00:18:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.569 ************************************ 00:03:30.569 END TEST hugepages 00:03:30.569 ************************************ 00:03:30.569 00:18:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.569 00:18:56 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:30.569 00:18:56 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:30.569 00:18:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.569 ************************************ 00:03:30.569 START TEST driver 00:03:30.569 ************************************ 00:03:30.569 00:18:56 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.569 * Looking for test storage... 
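Annotation: the hugepages traces above exercise two small helpers worth calling out. The first is a meminfo scanner that splits each /proc/meminfo (or per-node meminfo) line on ': ' and skips fields until it reaches the one it wants (HugePages_Surp here); the second is clear_hp, which zeroes every per-node hugepage pool and exports CLEAR_HUGE=yes before the next test group. A minimal standalone sketch of the same idea follows; it is illustrative only, not the SPDK setup scripts themselves.

get_meminfo_field() {
    # e.g. get_meminfo_field HugePages_Surp [/sys/devices/system/node/node0/meminfo]
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

clear_hugepages() {
    # Zero every hugepage pool on every NUMA node, as clear_hp does above.
    # The trace only shows 'echo 0'; writing it to nr_hugepages is the
    # conventional sysfs target and is assumed here.
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
}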
00:03:30.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.569 00:18:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:30.569 00:18:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.569 00:18:56 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.098 00:18:59 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:33.098 00:18:59 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:33.098 00:18:59 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:33.098 00:18:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:33.098 ************************************ 00:03:33.098 START TEST guess_driver 00:03:33.098 ************************************ 00:03:33.098 00:18:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:03:33.098 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:33.098 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:33.098 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:33.098 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:33.098 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:33.098 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:33.099 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.099 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.099 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.099 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.099 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:33.099 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:33.099 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:33.099 00:18:59 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:33.099 Looking for driver=vfio-pci 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.099 00:18:59 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.472 00:19:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.408 00:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.408 00:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.408 00:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.666 00:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:35.666 00:19:01 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:35.666 00:19:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.666 00:19:01 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.192 00:03:38.192 real 0m5.044s 00:03:38.192 user 0m1.168s 00:03:38.192 sys 0m2.057s 00:03:38.192 00:19:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:38.192 00:19:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.192 ************************************ 00:03:38.192 END TEST guess_driver 00:03:38.192 ************************************ 00:03:38.192 00:03:38.192 real 0m7.662s 00:03:38.192 user 0m1.834s 00:03:38.192 sys 0m3.153s 00:03:38.192 00:19:04 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:38.192 
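Annotation: END TEST guess_driver above wraps up the driver pick. vfio-pci is chosen because /sys/module/vfio/parameters/enable_unsafe_noiommu_mode is readable, 189 IOMMU groups are present, and modprobe --show-depends vfio_pci resolves to real .ko modules. Condensed into a standalone check (a sketch of the logic visible in the trace, not the setup/driver.sh source; treating "unsafe no-IOMMU mode enabled" as an alternative to having IOMMU groups is an assumption about the script's intent):

pick_driver() {
    shopt -s nullglob                        # so an empty iommu_groups dir really counts as 0
    local unsafe=N groups=(/sys/kernel/iommu_groups/*)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
        && unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } \
        && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci                        # the value the test later echoes as 'Looking for driver=vfio-pci'
    else
        echo 'No valid driver found'
    fi
}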
00:19:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.192 ************************************ 00:03:38.192 END TEST driver 00:03:38.192 ************************************ 00:03:38.192 00:19:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.192 00:19:04 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:38.192 00:19:04 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:38.192 00:19:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.192 ************************************ 00:03:38.192 START TEST devices 00:03:38.192 ************************************ 00:03:38.192 00:19:04 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:38.192 * Looking for test storage... 00:03:38.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.192 00:19:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:38.192 00:19:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:38.192 00:19:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.192 00:19:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.094 00:19:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:40.094 00:19:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:40.094 00:19:05 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:40.094 No valid GPT data, 
bailing 00:03:40.094 00:19:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.094 00:19:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.094 00:19:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.094 00:19:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:40.094 00:19:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:40.094 00:19:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:40.095 00:19:05 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:40.095 00:19:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:40.095 00:19:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.095 00:19:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:40.095 00:19:05 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:40.095 00:19:05 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:40.095 00:19:05 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:40.095 00:19:05 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:40.095 00:19:05 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:40.095 00:19:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.095 ************************************ 00:03:40.095 START TEST nvme_mount 00:03:40.095 ************************************ 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:40.095 00:19:05 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.095 00:19:05 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:41.031 Creating new GPT entries in memory. 00:03:41.031 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.031 other utilities. 00:03:41.031 00:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.031 00:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.031 00:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.031 00:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.031 00:19:06 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:41.996 Creating new GPT entries in memory. 00:03:41.996 The operation has completed successfully. 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 735282 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:41.996 00:19:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.996 00:19:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.371 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.372 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.372 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.630 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:43.630 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:43.630 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:43.630 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:43.630 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:43.630 00:19:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:43.630 00:19:09 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.630 00:19:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:43.630 00:19:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.889 00:19:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.265 00:19:11 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.265 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.266 00:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.643 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:46.644 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.903 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.903 00:03:46.903 real 0m6.876s 00:03:46.903 user 0m1.711s 00:03:46.903 sys 0m2.787s 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:46.903 00:19:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:46.903 ************************************ 00:03:46.903 END TEST nvme_mount 00:03:46.903 ************************************ 
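Annotation: the nvme_mount test that just finished follows a simple lifecycle. A disk qualifies if it is not zoned, blkid reports no partition table, and it is at least min_disk_size (3221225472 bytes); the test then zaps and repartitions it with sgdisk, formats, mounts, drops a test_nvme file, verifies the device stays bound while mounted (the long runs of [[ 0000:xx:xx.x == 0000:88:00.0 ]] comparisons are verify() scanning setup.sh config output under PCI_ALLOWED), and finally unmounts and wipes. Stripped of the verification plumbing and the intermediate whole-disk pass, the cycle looks roughly like this (paths are placeholders, not the jenkins workspace paths above):

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                          # placeholder for the test's nvme_mount directory
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199          # one 1 GiB partition, as in the trace
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                       # the dummy file verify() checks for
# ... checks run while the filesystem is mounted ...
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"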
00:03:46.903 00:19:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:46.903 00:19:12 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:46.903 00:19:12 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:46.903 00:19:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.903 ************************************ 00:03:46.903 START TEST dm_mount 00:03:46.903 ************************************ 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.903 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.904 00:19:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:47.841 Creating new GPT entries in memory. 00:03:47.841 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.841 other utilities. 00:03:47.841 00:19:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.841 00:19:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.841 00:19:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.841 00:19:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.841 00:19:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:48.779 Creating new GPT entries in memory. 00:03:48.779 The operation has completed successfully. 
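Annotation: dm_mount repeats the partitioning step, this time asking partition_drive for two partitions. The sgdisk --new=1:2048:2099199 call above and the --new=2:2099200:4196351 call that follows come from the arithmetic visible in the trace (size starts at 1073741824 bytes and is divided by 512 into sectors). The same loop, reduced to a standalone sketch; the real helper additionally wraps sgdisk in sync_dev_uevents.sh so it can wait for the partition uevents:

disk=/dev/nvme0n1
part_no=2
size=$((1073741824 / 512))                   # 1 GiB expressed in 512-byte sectors
sgdisk "$disk" --zap-all
part_start=0 part_end=0
for ((part = 1; part <= part_no; part++)); do
    ((part_start = part_start == 0 ? 2048 : part_end + 1))
    ((part_end = part_start + size - 1))
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done

Once both partitions exist, the trace below concatenates them into a device-mapper target (dmsetup create nvme_dm_test), waits for /dev/mapper/nvme_dm_test to appear, resolves it to dm-0, and repeats the mkfs/mount/verify cycle against it.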
00:03:48.779 00:19:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.779 00:19:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.779 00:19:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:48.779 00:19:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:48.779 00:19:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:50.158 The operation has completed successfully. 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 737968 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.158 00:19:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:50.158 00:19:16 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.158 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.158 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:50.158 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:50.158 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.158 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.159 00:19:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.132 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:51.133 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.391 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.391 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:51.392 00:19:17 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.392 00:19:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:52.768 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.768 00:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:53.027 00:03:53.027 real 0m6.058s 00:03:53.027 user 0m1.139s 00:03:53.027 sys 0m1.805s 00:03:53.027 00:19:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:53.027 00:19:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:53.027 ************************************ 00:03:53.027 END TEST dm_mount 00:03:53.027 ************************************ 00:03:53.027 00:19:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:53.027 00:19:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:53.027 00:19:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.027 00:19:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.027 00:19:18 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:53.027 00:19:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.027 00:19:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.286 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:53.286 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:53.286 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.286 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.286 00:19:19 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:53.286 00:19:19 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.286 00:19:19 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:53.286 00:19:19 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.286 00:19:19 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:53.286 00:19:19 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.286 00:19:19 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:53.286 00:03:53.286 real 0m14.996s 00:03:53.286 user 0m3.577s 00:03:53.286 sys 0m5.698s 00:03:53.286 00:19:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:53.286 00:19:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:53.286 ************************************ 00:03:53.286 END TEST devices 00:03:53.286 ************************************ 00:03:53.286 00:03:53.286 real 0m47.126s 00:03:53.286 user 0m14.254s 00:03:53.286 sys 0m21.456s 00:03:53.286 00:19:19 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:53.286 00:19:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.286 ************************************ 00:03:53.286 END TEST setup.sh 00:03:53.286 ************************************ 00:03:53.286 00:19:19 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:54.660 Hugepages 00:03:54.660 node hugesize free / total 00:03:54.660 node0 1048576kB 0 / 0 00:03:54.660 node0 2048kB 2048 / 2048 00:03:54.660 node1 1048576kB 0 / 0 00:03:54.660 node1 2048kB 0 / 0 00:03:54.660 00:03:54.660 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.660 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:54.660 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:54.660 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:54.660 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:54.660 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:54.660 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:54.660 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:54.660 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:54.660 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:54.660 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:54.660 00:19:20 -- spdk/autotest.sh@130 -- # uname -s 00:03:54.660 00:19:20 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:54.660 00:19:20 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:54.660 00:19:20 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.036 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.036 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.036 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.036 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.036 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.036 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.036 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.036 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.036 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.973 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.973 00:19:23 -- common/autotest_common.sh@1529 -- # sleep 1 00:03:58.355 00:19:24 -- common/autotest_common.sh@1530 -- # bdfs=() 00:03:58.355 00:19:24 -- common/autotest_common.sh@1530 -- # local bdfs 00:03:58.355 00:19:24 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:03:58.355 00:19:24 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:03:58.355 00:19:24 -- common/autotest_common.sh@1510 -- # bdfs=() 00:03:58.355 00:19:24 -- common/autotest_common.sh@1510 -- # local bdfs 00:03:58.355 00:19:24 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.355 00:19:24 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:58.355 00:19:24 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:03:58.355 00:19:24 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:03:58.355 00:19:24 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:88:00.0 00:03:58.355 00:19:24 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.290 Waiting for block devices as requested 00:03:59.549 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:59.549 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:59.549 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:59.549 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:59.807 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:59.807 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:59.807 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:59.807 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:00.065 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:00.065 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:00.065 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:00.065 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:00.323 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:00.323 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:00.323 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:00.323 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:00.580 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:00.580 00:19:26 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 
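The pre-cleanup step traced above builds its list of NVMe controllers by rendering the SPDK config with gen_nvme.sh and pulling each controller's PCI address (traddr) out with jq, then iterates over that list. A minimal sketch of that enumeration helper, reconstructed from the traced commands (error handling and the xtrace plumbing in autotest_common.sh are omitted; $rootdir is the SPDK checkout used by this job):

    get_nvme_bdfs() {
        # Render the generated NVMe config and extract each controller's PCI address.
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        # The trace bails out when no controllers are found.
        (( ${#bdfs[@]} == 0 )) && return 1
        printf '%s\n' "${bdfs[@]}"
    }

On this node the helper resolves to a single controller, 0000:88:00.0, which the per-bdf loop below then inspects via sysfs and nvme id-ctrl.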
00:04:00.580 00:19:26 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1499 -- # grep 0000:88:00.0/nvme/nvme 00:04:00.580 00:19:26 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:00.580 00:19:26 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:04:00.580 00:19:26 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1542 -- # grep oacs 00:04:00.580 00:19:26 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:04:00.580 00:19:26 -- common/autotest_common.sh@1542 -- # oacs=' 0xf' 00:04:00.580 00:19:26 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:04:00.580 00:19:26 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:04:00.580 00:19:26 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:04:00.580 00:19:26 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:04:00.580 00:19:26 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:04:00.580 00:19:26 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:04:00.580 00:19:26 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:04:00.580 00:19:26 -- common/autotest_common.sh@1554 -- # continue 00:04:00.580 00:19:26 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:00.580 00:19:26 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:00.580 00:19:26 -- common/autotest_common.sh@10 -- # set +x 00:04:00.580 00:19:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:00.580 00:19:26 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:00.580 00:19:26 -- common/autotest_common.sh@10 -- # set +x 00:04:00.580 00:19:26 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.955 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:01.955 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:01.955 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:01.955 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:01.955 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:01.955 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:01.955 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:01.955 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:01.955 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:03.333 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.333 00:19:29 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:03.333 00:19:29 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:03.333 00:19:29 -- 
common/autotest_common.sh@10 -- # set +x 00:04:03.333 00:19:29 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:03.333 00:19:29 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:04:03.333 00:19:29 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.333 00:19:29 -- common/autotest_common.sh@1574 -- # bdfs=() 00:04:03.333 00:19:29 -- common/autotest_common.sh@1574 -- # local bdfs 00:04:03.333 00:19:29 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:04:03.333 00:19:29 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:03.333 00:19:29 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:03.333 00:19:29 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.333 00:19:29 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.333 00:19:29 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:03.333 00:19:29 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:03.333 00:19:29 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:88:00.0 00:04:03.333 00:19:29 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:04:03.333 00:19:29 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:03.333 00:19:29 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:04:03.333 00:19:29 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:03.333 00:19:29 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:04:03.333 00:19:29 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:88:00.0 00:04:03.333 00:19:29 -- common/autotest_common.sh@1589 -- # [[ -z 0000:88:00.0 ]] 00:04:03.333 00:19:29 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=743859 00:04:03.333 00:19:29 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.333 00:19:29 -- common/autotest_common.sh@1595 -- # waitforlisten 743859 00:04:03.333 00:19:29 -- common/autotest_common.sh@828 -- # '[' -z 743859 ']' 00:04:03.333 00:19:29 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.333 00:19:29 -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:03.333 00:19:29 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.333 00:19:29 -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:03.333 00:19:29 -- common/autotest_common.sh@10 -- # set +x 00:04:03.333 [2024-05-15 00:19:29.372119] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
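The opal_revert_cleanup path traced here first narrows the controller list to devices with the requested PCI device ID (0x0a54) by reading sysfs, then starts spdk_tgt and reverts the drive over JSON-RPC. A sketch of the device-ID filter, put together from the traced sysfs reads (get_nvme_bdfs is the enumeration helper sketched earlier):

    get_nvme_bdfs_by_id() {
        # Keep only controllers whose PCI device ID matches the requested one.
        local id=$1 bdf device
        local -a bdfs=()
        for bdf in $(get_nvme_bdfs); do
            device=$(cat "/sys/bus/pci/devices/$bdf/device")
            [[ $device == "$id" ]] && bdfs+=("$bdf")
        done
        printf '%s\n' "${bdfs[@]}"
    }

With 0x0a54 this again matches only 0000:88:00.0, which is then handed to bdev_nvme_attach_controller in the next step.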
00:04:03.333 [2024-05-15 00:19:29.372222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743859 ] 00:04:03.333 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.333 [2024-05-15 00:19:29.445829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.591 [2024-05-15 00:19:29.562645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.525 00:19:30 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:04.525 00:19:30 -- common/autotest_common.sh@861 -- # return 0 00:04:04.525 00:19:30 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:04:04.525 00:19:30 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:04:04.525 00:19:30 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:07.835 nvme0n1 00:04:07.835 00:19:33 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:07.835 [2024-05-15 00:19:33.652921] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:07.835 [2024-05-15 00:19:33.652987] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:07.835 request: 00:04:07.835 { 00:04:07.835 "nvme_ctrlr_name": "nvme0", 00:04:07.835 "password": "test", 00:04:07.835 "method": "bdev_nvme_opal_revert", 00:04:07.835 "req_id": 1 00:04:07.835 } 00:04:07.835 Got JSON-RPC error response 00:04:07.835 response: 00:04:07.835 { 00:04:07.835 "code": -32603, 00:04:07.835 "message": "Internal error" 00:04:07.835 } 00:04:07.835 00:19:33 -- common/autotest_common.sh@1601 -- # true 00:04:07.835 00:19:33 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:04:07.835 00:19:33 -- common/autotest_common.sh@1605 -- # killprocess 743859 00:04:07.835 00:19:33 -- common/autotest_common.sh@947 -- # '[' -z 743859 ']' 00:04:07.835 00:19:33 -- common/autotest_common.sh@951 -- # kill -0 743859 00:04:07.835 00:19:33 -- common/autotest_common.sh@952 -- # uname 00:04:07.835 00:19:33 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:07.835 00:19:33 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 743859 00:04:07.835 00:19:33 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:07.835 00:19:33 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:07.835 00:19:33 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 743859' 00:04:07.835 killing process with pid 743859 00:04:07.835 00:19:33 -- common/autotest_common.sh@966 -- # kill 743859 00:04:07.835 00:19:33 -- common/autotest_common.sh@971 -- # wait 743859 00:04:09.733 00:19:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:09.733 00:19:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:09.733 00:19:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:09.733 00:19:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:09.733 00:19:35 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:09.733 00:19:35 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:09.733 00:19:35 -- common/autotest_common.sh@10 -- # set +x 00:04:09.733 00:19:35 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:09.733 00:19:35 -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:09.733 00:19:35 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:09.733 00:19:35 -- common/autotest_common.sh@10 -- # set +x 00:04:09.733 ************************************ 00:04:09.733 START TEST env 00:04:09.733 ************************************ 00:04:09.733 00:19:35 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:09.733 * Looking for test storage... 00:04:09.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:09.733 00:19:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:09.733 00:19:35 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:09.733 00:19:35 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:09.733 00:19:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.733 ************************************ 00:04:09.733 START TEST env_memory 00:04:09.733 ************************************ 00:04:09.733 00:19:35 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:09.733 00:04:09.733 00:04:09.733 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.733 http://cunit.sourceforge.net/ 00:04:09.733 00:04:09.733 00:04:09.733 Suite: memory 00:04:09.733 Test: alloc and free memory map ...[2024-05-15 00:19:35.672300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:09.733 passed 00:04:09.733 Test: mem map translation ...[2024-05-15 00:19:35.693101] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:09.733 [2024-05-15 00:19:35.693122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:09.733 [2024-05-15 00:19:35.693177] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:09.733 [2024-05-15 00:19:35.693189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:09.733 passed 00:04:09.733 Test: mem map registration ...[2024-05-15 00:19:35.733550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:09.733 [2024-05-15 00:19:35.733569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:09.733 passed 00:04:09.733 Test: mem map adjacent registrations ...passed 00:04:09.733 00:04:09.733 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.733 suites 1 1 n/a 0 0 00:04:09.733 tests 4 4 4 0 0 00:04:09.733 asserts 152 152 152 0 n/a 00:04:09.733 00:04:09.734 Elapsed time = 0.141 seconds 00:04:09.734 00:04:09.734 real 0m0.149s 00:04:09.734 user 0m0.142s 00:04:09.734 sys 0m0.007s 00:04:09.734 00:19:35 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:09.734 00:19:35 
env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:09.734 ************************************ 00:04:09.734 END TEST env_memory 00:04:09.734 ************************************ 00:04:09.734 00:19:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:09.734 00:19:35 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:09.734 00:19:35 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:09.734 00:19:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.734 ************************************ 00:04:09.734 START TEST env_vtophys 00:04:09.734 ************************************ 00:04:09.734 00:19:35 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:09.734 EAL: lib.eal log level changed from notice to debug 00:04:09.734 EAL: Detected lcore 0 as core 0 on socket 0 00:04:09.734 EAL: Detected lcore 1 as core 1 on socket 0 00:04:09.734 EAL: Detected lcore 2 as core 2 on socket 0 00:04:09.734 EAL: Detected lcore 3 as core 3 on socket 0 00:04:09.734 EAL: Detected lcore 4 as core 4 on socket 0 00:04:09.734 EAL: Detected lcore 5 as core 5 on socket 0 00:04:09.734 EAL: Detected lcore 6 as core 8 on socket 0 00:04:09.734 EAL: Detected lcore 7 as core 9 on socket 0 00:04:09.734 EAL: Detected lcore 8 as core 10 on socket 0 00:04:09.734 EAL: Detected lcore 9 as core 11 on socket 0 00:04:09.734 EAL: Detected lcore 10 as core 12 on socket 0 00:04:09.734 EAL: Detected lcore 11 as core 13 on socket 0 00:04:09.734 EAL: Detected lcore 12 as core 0 on socket 1 00:04:09.734 EAL: Detected lcore 13 as core 1 on socket 1 00:04:09.734 EAL: Detected lcore 14 as core 2 on socket 1 00:04:09.734 EAL: Detected lcore 15 as core 3 on socket 1 00:04:09.734 EAL: Detected lcore 16 as core 4 on socket 1 00:04:09.734 EAL: Detected lcore 17 as core 5 on socket 1 00:04:09.734 EAL: Detected lcore 18 as core 8 on socket 1 00:04:09.734 EAL: Detected lcore 19 as core 9 on socket 1 00:04:09.734 EAL: Detected lcore 20 as core 10 on socket 1 00:04:09.734 EAL: Detected lcore 21 as core 11 on socket 1 00:04:09.734 EAL: Detected lcore 22 as core 12 on socket 1 00:04:09.734 EAL: Detected lcore 23 as core 13 on socket 1 00:04:09.734 EAL: Detected lcore 24 as core 0 on socket 0 00:04:09.734 EAL: Detected lcore 25 as core 1 on socket 0 00:04:09.734 EAL: Detected lcore 26 as core 2 on socket 0 00:04:09.734 EAL: Detected lcore 27 as core 3 on socket 0 00:04:09.734 EAL: Detected lcore 28 as core 4 on socket 0 00:04:09.734 EAL: Detected lcore 29 as core 5 on socket 0 00:04:09.734 EAL: Detected lcore 30 as core 8 on socket 0 00:04:09.734 EAL: Detected lcore 31 as core 9 on socket 0 00:04:09.734 EAL: Detected lcore 32 as core 10 on socket 0 00:04:09.734 EAL: Detected lcore 33 as core 11 on socket 0 00:04:09.734 EAL: Detected lcore 34 as core 12 on socket 0 00:04:09.734 EAL: Detected lcore 35 as core 13 on socket 0 00:04:09.734 EAL: Detected lcore 36 as core 0 on socket 1 00:04:09.734 EAL: Detected lcore 37 as core 1 on socket 1 00:04:09.734 EAL: Detected lcore 38 as core 2 on socket 1 00:04:09.734 EAL: Detected lcore 39 as core 3 on socket 1 00:04:09.734 EAL: Detected lcore 40 as core 4 on socket 1 00:04:09.734 EAL: Detected lcore 41 as core 5 on socket 1 00:04:09.734 EAL: Detected lcore 42 as core 8 on socket 1 00:04:09.734 EAL: Detected lcore 43 as core 9 on socket 1 00:04:09.734 EAL: Detected lcore 44 as core 10 on socket 1 00:04:09.734 EAL: 
Detected lcore 45 as core 11 on socket 1 00:04:09.734 EAL: Detected lcore 46 as core 12 on socket 1 00:04:09.734 EAL: Detected lcore 47 as core 13 on socket 1 00:04:09.734 EAL: Maximum logical cores by configuration: 128 00:04:09.734 EAL: Detected CPU lcores: 48 00:04:09.734 EAL: Detected NUMA nodes: 2 00:04:09.734 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:09.734 EAL: Detected shared linkage of DPDK 00:04:09.734 EAL: No shared files mode enabled, IPC will be disabled 00:04:09.734 EAL: Bus pci wants IOVA as 'DC' 00:04:09.734 EAL: Buses did not request a specific IOVA mode. 00:04:09.734 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:09.734 EAL: Selected IOVA mode 'VA' 00:04:09.734 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.734 EAL: Probing VFIO support... 00:04:09.734 EAL: IOMMU type 1 (Type 1) is supported 00:04:09.734 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:09.734 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:09.734 EAL: VFIO support initialized 00:04:09.734 EAL: Ask a virtual area of 0x2e000 bytes 00:04:09.734 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:09.734 EAL: Setting up physically contiguous memory... 00:04:09.734 EAL: Setting maximum number of open files to 524288 00:04:09.734 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:09.734 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:09.734 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:09.734 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x201000a00000 (size = 
0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:09.734 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.734 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:09.734 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:09.734 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.734 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:09.734 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:09.734 EAL: Hugepages will be freed exactly as allocated. 00:04:09.734 EAL: No shared files mode enabled, IPC is disabled 00:04:09.734 EAL: No shared files mode enabled, IPC is disabled 00:04:09.734 EAL: TSC frequency is ~2700000 KHz 00:04:09.734 EAL: Main lcore 0 is ready (tid=7f11a81f5a00;cpuset=[0]) 00:04:09.734 EAL: Trying to obtain current memory policy. 00:04:09.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.734 EAL: Restoring previous memory policy: 0 00:04:09.734 EAL: request: mp_malloc_sync 00:04:09.734 EAL: No shared files mode enabled, IPC is disabled 00:04:09.734 EAL: Heap on socket 0 was expanded by 2MB 00:04:09.734 EAL: No shared files mode enabled, IPC is disabled 00:04:09.992 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:09.992 EAL: Mem event callback 'spdk:(nil)' registered 00:04:09.993 00:04:09.993 00:04:09.993 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.993 http://cunit.sourceforge.net/ 00:04:09.993 00:04:09.993 00:04:09.993 Suite: components_suite 00:04:09.993 Test: vtophys_malloc_test ...passed 00:04:09.993 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 4MB 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was shrunk by 4MB 00:04:09.993 EAL: Trying to obtain current memory policy. 
00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 6MB 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was shrunk by 6MB 00:04:09.993 EAL: Trying to obtain current memory policy. 00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.993 EAL: Trying to obtain current memory policy. 00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.993 EAL: Trying to obtain current memory policy. 00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.993 EAL: Trying to obtain current memory policy. 00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 66MB 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was shrunk by 66MB 00:04:09.993 EAL: Trying to obtain current memory policy. 
00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 130MB 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was shrunk by 130MB 00:04:09.993 EAL: Trying to obtain current memory policy. 00:04:09.993 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.993 EAL: Restoring previous memory policy: 4 00:04:09.993 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.993 EAL: request: mp_malloc_sync 00:04:09.993 EAL: No shared files mode enabled, IPC is disabled 00:04:09.993 EAL: Heap on socket 0 was expanded by 258MB 00:04:10.251 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.251 EAL: request: mp_malloc_sync 00:04:10.251 EAL: No shared files mode enabled, IPC is disabled 00:04:10.251 EAL: Heap on socket 0 was shrunk by 258MB 00:04:10.251 EAL: Trying to obtain current memory policy. 00:04:10.251 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.251 EAL: Restoring previous memory policy: 4 00:04:10.251 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.251 EAL: request: mp_malloc_sync 00:04:10.251 EAL: No shared files mode enabled, IPC is disabled 00:04:10.251 EAL: Heap on socket 0 was expanded by 514MB 00:04:10.509 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.509 EAL: request: mp_malloc_sync 00:04:10.509 EAL: No shared files mode enabled, IPC is disabled 00:04:10.509 EAL: Heap on socket 0 was shrunk by 514MB 00:04:10.509 EAL: Trying to obtain current memory policy. 
00:04:10.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.767 EAL: Restoring previous memory policy: 4 00:04:10.767 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.767 EAL: request: mp_malloc_sync 00:04:10.767 EAL: No shared files mode enabled, IPC is disabled 00:04:10.767 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.024 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.282 EAL: request: mp_malloc_sync 00:04:11.282 EAL: No shared files mode enabled, IPC is disabled 00:04:11.282 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:11.282 passed 00:04:11.282 00:04:11.282 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.282 suites 1 1 n/a 0 0 00:04:11.282 tests 2 2 2 0 0 00:04:11.282 asserts 497 497 497 0 n/a 00:04:11.282 00:04:11.282 Elapsed time = 1.365 seconds 00:04:11.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.282 EAL: request: mp_malloc_sync 00:04:11.282 EAL: No shared files mode enabled, IPC is disabled 00:04:11.282 EAL: Heap on socket 0 was shrunk by 2MB 00:04:11.282 EAL: No shared files mode enabled, IPC is disabled 00:04:11.282 EAL: No shared files mode enabled, IPC is disabled 00:04:11.282 EAL: No shared files mode enabled, IPC is disabled 00:04:11.282 00:04:11.282 real 0m1.496s 00:04:11.282 user 0m0.855s 00:04:11.282 sys 0m0.607s 00:04:11.282 00:19:37 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:11.282 00:19:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:11.282 ************************************ 00:04:11.282 END TEST env_vtophys 00:04:11.282 ************************************ 00:04:11.282 00:19:37 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:11.282 00:19:37 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:11.282 00:19:37 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:11.282 00:19:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.282 ************************************ 00:04:11.282 START TEST env_pci 00:04:11.282 ************************************ 00:04:11.282 00:19:37 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:11.282 00:04:11.282 00:04:11.282 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.283 http://cunit.sourceforge.net/ 00:04:11.283 00:04:11.283 00:04:11.283 Suite: pci 00:04:11.283 Test: pci_hook ...[2024-05-15 00:19:37.394038] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 744883 has claimed it 00:04:11.283 EAL: Cannot find device (10000:00:01.0) 00:04:11.283 EAL: Failed to attach device on primary process 00:04:11.283 passed 00:04:11.283 00:04:11.283 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.283 suites 1 1 n/a 0 0 00:04:11.283 tests 1 1 1 0 0 00:04:11.283 asserts 25 25 25 0 n/a 00:04:11.283 00:04:11.283 Elapsed time = 0.027 seconds 00:04:11.283 00:04:11.283 real 0m0.039s 00:04:11.283 user 0m0.011s 00:04:11.283 sys 0m0.028s 00:04:11.283 00:19:37 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:11.283 00:19:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:11.283 ************************************ 00:04:11.283 END TEST env_pci 00:04:11.283 ************************************ 00:04:11.283 00:19:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:11.283 
00:19:37 env -- env/env.sh@15 -- # uname 00:04:11.541 00:19:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:11.541 00:19:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:11.541 00:19:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:11.541 00:19:37 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:04:11.541 00:19:37 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:11.541 00:19:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.541 ************************************ 00:04:11.541 START TEST env_dpdk_post_init 00:04:11.541 ************************************ 00:04:11.541 00:19:37 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:11.541 EAL: Detected CPU lcores: 48 00:04:11.541 EAL: Detected NUMA nodes: 2 00:04:11.541 EAL: Detected shared linkage of DPDK 00:04:11.541 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:11.541 EAL: Selected IOVA mode 'VA' 00:04:11.541 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.541 EAL: VFIO support initialized 00:04:11.541 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:11.541 EAL: Using IOMMU type 1 (Type 1) 00:04:11.541 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:11.541 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:11.541 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:11.541 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:11.541 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:11.541 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:11.541 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:11.800 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:12.736 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:16.016 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:16.016 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:16.016 Starting DPDK initialization... 00:04:16.016 Starting SPDK post initialization... 00:04:16.016 SPDK NVMe probe 00:04:16.016 Attaching to 0000:88:00.0 00:04:16.016 Attached to 0000:88:00.0 00:04:16.016 Cleaning up... 
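env.sh assembles the DPDK arguments for env_dpdk_post_init conditionally: a core mask is always passed, and on Linux a fixed base virtual address is appended so memory is mapped at a predictable location. A condensed sketch of that logic as traced above (run_test is the autotest wrapper; $testdir stands in for the absolute workspace path shown in the log):

    # Core mask first; --base-virtaddr only applies on Linux.
    argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        argv+='--base-virtaddr=0x200000000000'
    fi
    run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv

The probe output above confirms the resulting EAL pass: the sixteen I/OAT channels bind to spdk_ioat and the NVMe controller at 0000:88:00.0 attaches on socket 1 before the test cleans up.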
00:04:16.016 00:04:16.016 real 0m4.421s 00:04:16.016 user 0m3.266s 00:04:16.016 sys 0m0.209s 00:04:16.016 00:19:41 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:16.016 00:19:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.016 ************************************ 00:04:16.016 END TEST env_dpdk_post_init 00:04:16.016 ************************************ 00:04:16.016 00:19:41 env -- env/env.sh@26 -- # uname 00:04:16.016 00:19:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:16.016 00:19:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.016 00:19:41 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:16.016 00:19:41 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:16.016 00:19:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.016 ************************************ 00:04:16.016 START TEST env_mem_callbacks 00:04:16.016 ************************************ 00:04:16.016 00:19:41 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.016 EAL: Detected CPU lcores: 48 00:04:16.016 EAL: Detected NUMA nodes: 2 00:04:16.016 EAL: Detected shared linkage of DPDK 00:04:16.016 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.017 EAL: Selected IOVA mode 'VA' 00:04:16.017 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.017 EAL: VFIO support initialized 00:04:16.017 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.017 00:04:16.017 00:04:16.017 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.017 http://cunit.sourceforge.net/ 00:04:16.017 00:04:16.017 00:04:16.017 Suite: memory 00:04:16.017 Test: test ... 
00:04:16.017 register 0x200000200000 2097152 00:04:16.017 malloc 3145728 00:04:16.017 register 0x200000400000 4194304 00:04:16.017 buf 0x200000500000 len 3145728 PASSED 00:04:16.017 malloc 64 00:04:16.017 buf 0x2000004fff40 len 64 PASSED 00:04:16.017 malloc 4194304 00:04:16.017 register 0x200000800000 6291456 00:04:16.017 buf 0x200000a00000 len 4194304 PASSED 00:04:16.017 free 0x200000500000 3145728 00:04:16.017 free 0x2000004fff40 64 00:04:16.017 unregister 0x200000400000 4194304 PASSED 00:04:16.017 free 0x200000a00000 4194304 00:04:16.017 unregister 0x200000800000 6291456 PASSED 00:04:16.017 malloc 8388608 00:04:16.017 register 0x200000400000 10485760 00:04:16.017 buf 0x200000600000 len 8388608 PASSED 00:04:16.017 free 0x200000600000 8388608 00:04:16.017 unregister 0x200000400000 10485760 PASSED 00:04:16.017 passed 00:04:16.017 00:04:16.017 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.017 suites 1 1 n/a 0 0 00:04:16.017 tests 1 1 1 0 0 00:04:16.017 asserts 15 15 15 0 n/a 00:04:16.017 00:04:16.017 Elapsed time = 0.005 seconds 00:04:16.017 00:04:16.017 real 0m0.053s 00:04:16.017 user 0m0.020s 00:04:16.017 sys 0m0.033s 00:04:16.017 00:19:42 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:16.017 00:19:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:16.017 ************************************ 00:04:16.017 END TEST env_mem_callbacks 00:04:16.017 ************************************ 00:04:16.017 00:04:16.017 real 0m6.469s 00:04:16.017 user 0m4.410s 00:04:16.017 sys 0m1.087s 00:04:16.017 00:19:42 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:16.017 00:19:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.017 ************************************ 00:04:16.017 END TEST env 00:04:16.017 ************************************ 00:04:16.017 00:19:42 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:16.017 00:19:42 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:16.017 00:19:42 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:16.017 00:19:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.017 ************************************ 00:04:16.017 START TEST rpc 00:04:16.017 ************************************ 00:04:16.017 00:19:42 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:16.017 * Looking for test storage... 00:04:16.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.017 00:19:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=745532 00:04:16.017 00:19:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:16.017 00:19:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.017 00:19:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 745532 00:04:16.017 00:19:42 rpc -- common/autotest_common.sh@828 -- # '[' -z 745532 ']' 00:04:16.017 00:19:42 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.017 00:19:42 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:16.017 00:19:42 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
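The rpc.sh prologue traced here starts a dedicated spdk_tgt (with the bdev tracepoint group enabled), installs a cleanup trap, and blocks until the target is listening on the default RPC socket before any rpc_cmd calls run. A sketch of that launch pattern; killprocess and waitforlisten are helpers from autotest_common.sh whose internals are not shown in this log, so the polling loop below is only an assumption about what the wait amounts to:

    "$rootdir/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    # Assumed wait: poll the RPC socket until the target answers.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

In this run the target comes up as pid 745532, and the EAL banner that follows mirrors the earlier spdk_tgt start in the opal revert step.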
00:04:16.017 00:19:42 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:16.017 00:19:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.276 [2024-05-15 00:19:42.185141] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:04:16.276 [2024-05-15 00:19:42.185219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745532 ] 00:04:16.276 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.276 [2024-05-15 00:19:42.251724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.276 [2024-05-15 00:19:42.357424] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:16.276 [2024-05-15 00:19:42.357496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 745532' to capture a snapshot of events at runtime. 00:04:16.276 [2024-05-15 00:19:42.357510] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:16.276 [2024-05-15 00:19:42.357520] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:16.276 [2024-05-15 00:19:42.357530] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid745532 for offline analysis/debug. 00:04:16.276 [2024-05-15 00:19:42.357559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.534 00:19:42 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:16.534 00:19:42 rpc -- common/autotest_common.sh@861 -- # return 0 00:04:16.534 00:19:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.534 00:19:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.534 00:19:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.534 00:19:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.534 00:19:42 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:16.534 00:19:42 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:16.534 00:19:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.534 ************************************ 00:04:16.534 START TEST rpc_integrity 00:04:16.534 ************************************ 00:04:16.534 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:16.534 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.534 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.534 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.534 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.534 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.534 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.535 00:19:42 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.535 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.535 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.535 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.793 { 00:04:16.793 "name": "Malloc0", 00:04:16.793 "aliases": [ 00:04:16.793 "8f4b5754-e40d-4191-9c55-a31e98739f84" 00:04:16.793 ], 00:04:16.793 "product_name": "Malloc disk", 00:04:16.793 "block_size": 512, 00:04:16.793 "num_blocks": 16384, 00:04:16.793 "uuid": "8f4b5754-e40d-4191-9c55-a31e98739f84", 00:04:16.793 "assigned_rate_limits": { 00:04:16.793 "rw_ios_per_sec": 0, 00:04:16.793 "rw_mbytes_per_sec": 0, 00:04:16.793 "r_mbytes_per_sec": 0, 00:04:16.793 "w_mbytes_per_sec": 0 00:04:16.793 }, 00:04:16.793 "claimed": false, 00:04:16.793 "zoned": false, 00:04:16.793 "supported_io_types": { 00:04:16.793 "read": true, 00:04:16.793 "write": true, 00:04:16.793 "unmap": true, 00:04:16.793 "write_zeroes": true, 00:04:16.793 "flush": true, 00:04:16.793 "reset": true, 00:04:16.793 "compare": false, 00:04:16.793 "compare_and_write": false, 00:04:16.793 "abort": true, 00:04:16.793 "nvme_admin": false, 00:04:16.793 "nvme_io": false 00:04:16.793 }, 00:04:16.793 "memory_domains": [ 00:04:16.793 { 00:04:16.793 "dma_device_id": "system", 00:04:16.793 "dma_device_type": 1 00:04:16.793 }, 00:04:16.793 { 00:04:16.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.793 "dma_device_type": 2 00:04:16.793 } 00:04:16.793 ], 00:04:16.793 "driver_specific": {} 00:04:16.793 } 00:04:16.793 ]' 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.793 [2024-05-15 00:19:42.751777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.793 [2024-05-15 00:19:42.751822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.793 [2024-05-15 00:19:42.751846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc20c10 00:04:16.793 [2024-05-15 00:19:42.751861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.793 [2024-05-15 00:19:42.753340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.793 [2024-05-15 00:19:42.753370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.793 Passthru0 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.793 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.793 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.793 { 00:04:16.793 "name": "Malloc0", 00:04:16.793 "aliases": [ 00:04:16.793 "8f4b5754-e40d-4191-9c55-a31e98739f84" 00:04:16.793 ], 00:04:16.793 "product_name": "Malloc disk", 00:04:16.793 "block_size": 512, 00:04:16.793 "num_blocks": 16384, 00:04:16.793 "uuid": "8f4b5754-e40d-4191-9c55-a31e98739f84", 00:04:16.793 "assigned_rate_limits": { 00:04:16.793 "rw_ios_per_sec": 0, 00:04:16.793 "rw_mbytes_per_sec": 0, 00:04:16.793 "r_mbytes_per_sec": 0, 00:04:16.793 "w_mbytes_per_sec": 0 00:04:16.793 }, 00:04:16.793 "claimed": true, 00:04:16.793 "claim_type": "exclusive_write", 00:04:16.793 "zoned": false, 00:04:16.793 "supported_io_types": { 00:04:16.793 "read": true, 00:04:16.793 "write": true, 00:04:16.793 "unmap": true, 00:04:16.793 "write_zeroes": true, 00:04:16.793 "flush": true, 00:04:16.794 "reset": true, 00:04:16.794 "compare": false, 00:04:16.794 "compare_and_write": false, 00:04:16.794 "abort": true, 00:04:16.794 "nvme_admin": false, 00:04:16.794 "nvme_io": false 00:04:16.794 }, 00:04:16.794 "memory_domains": [ 00:04:16.794 { 00:04:16.794 "dma_device_id": "system", 00:04:16.794 "dma_device_type": 1 00:04:16.794 }, 00:04:16.794 { 00:04:16.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.794 "dma_device_type": 2 00:04:16.794 } 00:04:16.794 ], 00:04:16.794 "driver_specific": {} 00:04:16.794 }, 00:04:16.794 { 00:04:16.794 "name": "Passthru0", 00:04:16.794 "aliases": [ 00:04:16.794 "a1fc688c-3a2e-5f9e-8178-5491ebdc1182" 00:04:16.794 ], 00:04:16.794 "product_name": "passthru", 00:04:16.794 "block_size": 512, 00:04:16.794 "num_blocks": 16384, 00:04:16.794 "uuid": "a1fc688c-3a2e-5f9e-8178-5491ebdc1182", 00:04:16.794 "assigned_rate_limits": { 00:04:16.794 "rw_ios_per_sec": 0, 00:04:16.794 "rw_mbytes_per_sec": 0, 00:04:16.794 "r_mbytes_per_sec": 0, 00:04:16.794 "w_mbytes_per_sec": 0 00:04:16.794 }, 00:04:16.794 "claimed": false, 00:04:16.794 "zoned": false, 00:04:16.794 "supported_io_types": { 00:04:16.794 "read": true, 00:04:16.794 "write": true, 00:04:16.794 "unmap": true, 00:04:16.794 "write_zeroes": true, 00:04:16.794 "flush": true, 00:04:16.794 "reset": true, 00:04:16.794 "compare": false, 00:04:16.794 "compare_and_write": false, 00:04:16.794 "abort": true, 00:04:16.794 "nvme_admin": false, 00:04:16.794 "nvme_io": false 00:04:16.794 }, 00:04:16.794 "memory_domains": [ 00:04:16.794 { 00:04:16.794 "dma_device_id": "system", 00:04:16.794 "dma_device_type": 1 00:04:16.794 }, 00:04:16.794 { 00:04:16.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.794 "dma_device_type": 2 00:04:16.794 } 00:04:16.794 ], 00:04:16.794 "driver_specific": { 00:04:16.794 "passthru": { 00:04:16.794 "name": "Passthru0", 00:04:16.794 "base_bdev_name": "Malloc0" 00:04:16.794 } 00:04:16.794 } 00:04:16.794 } 00:04:16.794 ]' 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.794 
00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.794 00:19:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.794 00:04:16.794 real 0m0.222s 00:04:16.794 user 0m0.146s 00:04:16.794 sys 0m0.021s 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:16.794 00:19:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.794 ************************************ 00:04:16.794 END TEST rpc_integrity 00:04:16.794 ************************************ 00:04:16.794 00:19:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.794 00:19:42 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:16.794 00:19:42 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:16.794 00:19:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.794 ************************************ 00:04:16.794 START TEST rpc_plugins 00:04:16.794 ************************************ 00:04:16.794 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:04:16.794 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.794 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.794 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.794 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.794 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.794 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.794 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.794 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.794 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.794 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.794 { 00:04:16.794 "name": "Malloc1", 00:04:16.794 "aliases": [ 00:04:16.794 "d9f34937-7c8f-41ca-9041-d0b93681b0e8" 00:04:16.794 ], 00:04:16.794 "product_name": "Malloc disk", 00:04:16.794 "block_size": 4096, 00:04:16.794 "num_blocks": 256, 00:04:16.794 "uuid": "d9f34937-7c8f-41ca-9041-d0b93681b0e8", 00:04:16.794 "assigned_rate_limits": { 00:04:16.794 "rw_ios_per_sec": 0, 00:04:16.794 "rw_mbytes_per_sec": 0, 00:04:16.794 "r_mbytes_per_sec": 0, 00:04:16.794 "w_mbytes_per_sec": 0 00:04:16.794 }, 00:04:16.794 "claimed": false, 00:04:16.794 "zoned": false, 00:04:16.794 "supported_io_types": { 00:04:16.794 "read": true, 00:04:16.794 "write": true, 00:04:16.794 "unmap": true, 00:04:16.794 "write_zeroes": true, 00:04:16.794 
"flush": true, 00:04:16.794 "reset": true, 00:04:16.794 "compare": false, 00:04:16.794 "compare_and_write": false, 00:04:16.794 "abort": true, 00:04:16.794 "nvme_admin": false, 00:04:16.794 "nvme_io": false 00:04:16.794 }, 00:04:16.794 "memory_domains": [ 00:04:16.794 { 00:04:16.794 "dma_device_id": "system", 00:04:16.794 "dma_device_type": 1 00:04:16.794 }, 00:04:16.794 { 00:04:16.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.794 "dma_device_type": 2 00:04:16.794 } 00:04:16.794 ], 00:04:16.794 "driver_specific": {} 00:04:16.794 } 00:04:16.794 ]' 00:04:16.794 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:17.052 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:17.052 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:17.052 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.052 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.052 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.052 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:17.052 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.052 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.052 00:19:42 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.052 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:17.052 00:19:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:17.052 00:19:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:17.052 00:04:17.052 real 0m0.107s 00:04:17.052 user 0m0.071s 00:04:17.052 sys 0m0.010s 00:04:17.052 00:19:43 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:17.052 00:19:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.052 ************************************ 00:04:17.052 END TEST rpc_plugins 00:04:17.052 ************************************ 00:04:17.052 00:19:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:17.052 00:19:43 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:17.052 00:19:43 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:17.052 00:19:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.052 ************************************ 00:04:17.052 START TEST rpc_trace_cmd_test 00:04:17.052 ************************************ 00:04:17.052 00:19:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:04:17.052 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:17.052 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:17.052 00:19:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.052 00:19:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.052 00:19:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.052 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:17.053 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid745532", 00:04:17.053 "tpoint_group_mask": "0x8", 00:04:17.053 "iscsi_conn": { 00:04:17.053 "mask": "0x2", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "scsi": { 00:04:17.053 "mask": "0x4", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "bdev": { 00:04:17.053 "mask": "0x8", 00:04:17.053 "tpoint_mask": 
"0xffffffffffffffff" 00:04:17.053 }, 00:04:17.053 "nvmf_rdma": { 00:04:17.053 "mask": "0x10", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "nvmf_tcp": { 00:04:17.053 "mask": "0x20", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "ftl": { 00:04:17.053 "mask": "0x40", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "blobfs": { 00:04:17.053 "mask": "0x80", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "dsa": { 00:04:17.053 "mask": "0x200", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "thread": { 00:04:17.053 "mask": "0x400", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "nvme_pcie": { 00:04:17.053 "mask": "0x800", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "iaa": { 00:04:17.053 "mask": "0x1000", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "nvme_tcp": { 00:04:17.053 "mask": "0x2000", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "bdev_nvme": { 00:04:17.053 "mask": "0x4000", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 }, 00:04:17.053 "sock": { 00:04:17.053 "mask": "0x8000", 00:04:17.053 "tpoint_mask": "0x0" 00:04:17.053 } 00:04:17.053 }' 00:04:17.053 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:17.053 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:17.053 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:17.053 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:17.053 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:17.053 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:17.053 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:17.311 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:17.311 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:17.311 00:19:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:17.311 00:04:17.311 real 0m0.199s 00:04:17.311 user 0m0.181s 00:04:17.311 sys 0m0.011s 00:04:17.311 00:19:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:17.311 00:19:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.311 ************************************ 00:04:17.311 END TEST rpc_trace_cmd_test 00:04:17.311 ************************************ 00:04:17.311 00:19:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.311 00:19:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.311 00:19:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.311 00:19:43 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:17.311 00:19:43 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:17.311 00:19:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.311 ************************************ 00:04:17.311 START TEST rpc_daemon_integrity 00:04:17.311 ************************************ 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.311 { 00:04:17.311 "name": "Malloc2", 00:04:17.311 "aliases": [ 00:04:17.311 "e7f5919d-1c25-43d9-802a-6180f07c2624" 00:04:17.311 ], 00:04:17.311 "product_name": "Malloc disk", 00:04:17.311 "block_size": 512, 00:04:17.311 "num_blocks": 16384, 00:04:17.311 "uuid": "e7f5919d-1c25-43d9-802a-6180f07c2624", 00:04:17.311 "assigned_rate_limits": { 00:04:17.311 "rw_ios_per_sec": 0, 00:04:17.311 "rw_mbytes_per_sec": 0, 00:04:17.311 "r_mbytes_per_sec": 0, 00:04:17.311 "w_mbytes_per_sec": 0 00:04:17.311 }, 00:04:17.311 "claimed": false, 00:04:17.311 "zoned": false, 00:04:17.311 "supported_io_types": { 00:04:17.311 "read": true, 00:04:17.311 "write": true, 00:04:17.311 "unmap": true, 00:04:17.311 "write_zeroes": true, 00:04:17.311 "flush": true, 00:04:17.311 "reset": true, 00:04:17.311 "compare": false, 00:04:17.311 "compare_and_write": false, 00:04:17.311 "abort": true, 00:04:17.311 "nvme_admin": false, 00:04:17.311 "nvme_io": false 00:04:17.311 }, 00:04:17.311 "memory_domains": [ 00:04:17.311 { 00:04:17.311 "dma_device_id": "system", 00:04:17.311 "dma_device_type": 1 00:04:17.311 }, 00:04:17.311 { 00:04:17.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.311 "dma_device_type": 2 00:04:17.311 } 00:04:17.311 ], 00:04:17.311 "driver_specific": {} 00:04:17.311 } 00:04:17.311 ]' 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.311 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.312 [2024-05-15 00:19:43.438051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.312 [2024-05-15 00:19:43.438090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.312 [2024-05-15 00:19:43.438120] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc207b0 00:04:17.312 [2024-05-15 00:19:43.438134] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.312 [2024-05-15 00:19:43.439505] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.312 [2024-05-15 00:19:43.439535] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.312 Passthru0 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.312 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.312 { 00:04:17.312 "name": "Malloc2", 00:04:17.312 "aliases": [ 00:04:17.312 "e7f5919d-1c25-43d9-802a-6180f07c2624" 00:04:17.312 ], 00:04:17.312 "product_name": "Malloc disk", 00:04:17.312 "block_size": 512, 00:04:17.312 "num_blocks": 16384, 00:04:17.312 "uuid": "e7f5919d-1c25-43d9-802a-6180f07c2624", 00:04:17.312 "assigned_rate_limits": { 00:04:17.312 "rw_ios_per_sec": 0, 00:04:17.312 "rw_mbytes_per_sec": 0, 00:04:17.312 "r_mbytes_per_sec": 0, 00:04:17.312 "w_mbytes_per_sec": 0 00:04:17.312 }, 00:04:17.312 "claimed": true, 00:04:17.312 "claim_type": "exclusive_write", 00:04:17.312 "zoned": false, 00:04:17.312 "supported_io_types": { 00:04:17.312 "read": true, 00:04:17.312 "write": true, 00:04:17.312 "unmap": true, 00:04:17.312 "write_zeroes": true, 00:04:17.312 "flush": true, 00:04:17.312 "reset": true, 00:04:17.312 "compare": false, 00:04:17.312 "compare_and_write": false, 00:04:17.312 "abort": true, 00:04:17.312 "nvme_admin": false, 00:04:17.312 "nvme_io": false 00:04:17.312 }, 00:04:17.312 "memory_domains": [ 00:04:17.312 { 00:04:17.312 "dma_device_id": "system", 00:04:17.312 "dma_device_type": 1 00:04:17.312 }, 00:04:17.312 { 00:04:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.312 "dma_device_type": 2 00:04:17.312 } 00:04:17.312 ], 00:04:17.312 "driver_specific": {} 00:04:17.312 }, 00:04:17.312 { 00:04:17.312 "name": "Passthru0", 00:04:17.312 "aliases": [ 00:04:17.312 "8bf1ea42-f15b-5795-8650-32ae7b629682" 00:04:17.312 ], 00:04:17.312 "product_name": "passthru", 00:04:17.312 "block_size": 512, 00:04:17.312 "num_blocks": 16384, 00:04:17.312 "uuid": "8bf1ea42-f15b-5795-8650-32ae7b629682", 00:04:17.312 "assigned_rate_limits": { 00:04:17.312 "rw_ios_per_sec": 0, 00:04:17.312 "rw_mbytes_per_sec": 0, 00:04:17.312 "r_mbytes_per_sec": 0, 00:04:17.312 "w_mbytes_per_sec": 0 00:04:17.312 }, 00:04:17.312 "claimed": false, 00:04:17.312 "zoned": false, 00:04:17.312 "supported_io_types": { 00:04:17.312 "read": true, 00:04:17.312 "write": true, 00:04:17.312 "unmap": true, 00:04:17.312 "write_zeroes": true, 00:04:17.312 "flush": true, 00:04:17.312 "reset": true, 00:04:17.312 "compare": false, 00:04:17.312 "compare_and_write": false, 00:04:17.312 "abort": true, 00:04:17.312 "nvme_admin": false, 00:04:17.312 "nvme_io": false 00:04:17.312 }, 00:04:17.312 "memory_domains": [ 00:04:17.312 { 00:04:17.312 "dma_device_id": "system", 00:04:17.312 "dma_device_type": 1 00:04:17.312 }, 00:04:17.312 { 00:04:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.312 "dma_device_type": 2 00:04:17.312 } 00:04:17.312 ], 00:04:17.312 "driver_specific": { 00:04:17.312 "passthru": { 00:04:17.312 "name": "Passthru0", 00:04:17.312 "base_bdev_name": "Malloc2" 00:04:17.312 } 00:04:17.312 } 00:04:17.312 } 00:04:17.312 ]' 00:04:17.312 00:19:43 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.570 00:04:17.570 real 0m0.233s 00:04:17.570 user 0m0.151s 00:04:17.570 sys 0m0.023s 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:17.570 00:19:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.570 ************************************ 00:04:17.570 END TEST rpc_daemon_integrity 00:04:17.570 ************************************ 00:04:17.570 00:19:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.570 00:19:43 rpc -- rpc/rpc.sh@84 -- # killprocess 745532 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@947 -- # '[' -z 745532 ']' 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@951 -- # kill -0 745532 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@952 -- # uname 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 745532 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 745532' 00:04:17.570 killing process with pid 745532 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@966 -- # kill 745532 00:04:17.570 00:19:43 rpc -- common/autotest_common.sh@971 -- # wait 745532 00:04:18.136 00:04:18.136 real 0m1.981s 00:04:18.136 user 0m2.448s 00:04:18.136 sys 0m0.607s 00:04:18.136 00:19:44 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:18.136 00:19:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.136 ************************************ 00:04:18.136 END TEST rpc 00:04:18.136 ************************************ 00:04:18.136 00:19:44 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:18.136 00:19:44 -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:18.136 00:19:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:18.136 00:19:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.136 ************************************ 00:04:18.136 START TEST skip_rpc 00:04:18.136 ************************************ 00:04:18.136 00:19:44 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:18.136 * Looking for test storage... 00:04:18.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:18.136 00:19:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:18.136 00:19:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.136 00:19:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:18.136 00:19:44 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:18.136 00:19:44 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:18.136 00:19:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.136 ************************************ 00:04:18.136 START TEST skip_rpc 00:04:18.136 ************************************ 00:04:18.136 00:19:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:18.136 00:19:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=745973 00:04:18.136 00:19:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:18.136 00:19:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.136 00:19:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:18.136 [2024-05-15 00:19:44.248741] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
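The skip_rpc case above comes down to: with --no-rpc-server the target never creates the JSON-RPC socket, so the client call the harness wraps in NOT must fail with a non-zero exit. A hand-run sketch under the same assumptions (default socket path, rpc.py from the SPDK tree):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    # expected to fail: nothing is listening on /var/tmp/spdk.sock
    ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'expected failure'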
00:04:18.137 [2024-05-15 00:19:44.248803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745973 ] 00:04:18.137 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.395 [2024-05-15 00:19:44.318196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.395 [2024-05-15 00:19:44.434854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 745973 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 745973 ']' 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 745973 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 745973 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 745973' 00:04:23.661 killing process with pid 745973 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 745973 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 745973 00:04:23.661 00:04:23.661 real 0m5.493s 00:04:23.661 user 0m5.153s 00:04:23.661 sys 0m0.339s 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:23.661 00:19:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.661 ************************************ 00:04:23.661 END TEST skip_rpc 
00:04:23.661 ************************************ 00:04:23.661 00:19:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:23.661 00:19:49 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:23.661 00:19:49 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:23.661 00:19:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.661 ************************************ 00:04:23.661 START TEST skip_rpc_with_json 00:04:23.661 ************************************ 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=746660 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 746660 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 746660 ']' 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:23.661 00:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.661 [2024-05-15 00:19:49.799251] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
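The skip_rpc_with_json flow that follows creates the TCP transport over RPC, snapshots the running configuration with save_config into test/rpc/config.json, and later relaunches the target from that JSON (the --json run with pid 746815 further down), grepping its log for the TCP transport init notice. A sketch of the RPC side, assuming the paths shown in the log:

    ./scripts/rpc.py nvmf_create_transport -t tcp        # registers the TCP transport
    ./scripts/rpc.py save_config > test/rpc/config.json  # dump the live configuration as JSON
    # a later spdk_tgt started with --json test/rpc/config.json replays these calls at startup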
00:04:23.661 [2024-05-15 00:19:49.799361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746660 ] 00:04:23.920 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.920 [2024-05-15 00:19:49.872571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.920 [2024-05-15 00:19:49.987175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 [2024-05-15 00:19:50.755504] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:24.855 request: 00:04:24.855 { 00:04:24.855 "trtype": "tcp", 00:04:24.855 "method": "nvmf_get_transports", 00:04:24.855 "req_id": 1 00:04:24.855 } 00:04:24.855 Got JSON-RPC error response 00:04:24.855 response: 00:04:24.855 { 00:04:24.855 "code": -19, 00:04:24.855 "message": "No such device" 00:04:24.855 } 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 [2024-05-15 00:19:50.763628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:24.855 00:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.855 { 00:04:24.855 "subsystems": [ 00:04:24.855 { 00:04:24.855 "subsystem": "vfio_user_target", 00:04:24.855 "config": null 00:04:24.855 }, 00:04:24.855 { 00:04:24.855 "subsystem": "keyring", 00:04:24.855 "config": [] 00:04:24.855 }, 00:04:24.855 { 00:04:24.855 "subsystem": "iobuf", 00:04:24.855 "config": [ 00:04:24.855 { 00:04:24.855 "method": "iobuf_set_options", 00:04:24.855 "params": { 00:04:24.855 "small_pool_count": 8192, 00:04:24.855 "large_pool_count": 1024, 00:04:24.855 "small_bufsize": 8192, 00:04:24.855 "large_bufsize": 135168 00:04:24.855 } 00:04:24.855 } 00:04:24.855 ] 00:04:24.855 }, 00:04:24.855 { 00:04:24.855 "subsystem": "sock", 00:04:24.855 "config": [ 00:04:24.855 { 00:04:24.855 "method": "sock_impl_set_options", 00:04:24.855 "params": { 00:04:24.855 "impl_name": "posix", 00:04:24.855 "recv_buf_size": 2097152, 00:04:24.855 "send_buf_size": 2097152, 
00:04:24.855 "enable_recv_pipe": true, 00:04:24.855 "enable_quickack": false, 00:04:24.855 "enable_placement_id": 0, 00:04:24.855 "enable_zerocopy_send_server": true, 00:04:24.855 "enable_zerocopy_send_client": false, 00:04:24.855 "zerocopy_threshold": 0, 00:04:24.855 "tls_version": 0, 00:04:24.855 "enable_ktls": false 00:04:24.855 } 00:04:24.855 }, 00:04:24.855 { 00:04:24.855 "method": "sock_impl_set_options", 00:04:24.855 "params": { 00:04:24.855 "impl_name": "ssl", 00:04:24.855 "recv_buf_size": 4096, 00:04:24.855 "send_buf_size": 4096, 00:04:24.855 "enable_recv_pipe": true, 00:04:24.855 "enable_quickack": false, 00:04:24.855 "enable_placement_id": 0, 00:04:24.855 "enable_zerocopy_send_server": true, 00:04:24.855 "enable_zerocopy_send_client": false, 00:04:24.855 "zerocopy_threshold": 0, 00:04:24.855 "tls_version": 0, 00:04:24.855 "enable_ktls": false 00:04:24.855 } 00:04:24.855 } 00:04:24.855 ] 00:04:24.855 }, 00:04:24.855 { 00:04:24.855 "subsystem": "vmd", 00:04:24.855 "config": [] 00:04:24.855 }, 00:04:24.855 { 00:04:24.855 "subsystem": "accel", 00:04:24.855 "config": [ 00:04:24.855 { 00:04:24.855 "method": "accel_set_options", 00:04:24.855 "params": { 00:04:24.855 "small_cache_size": 128, 00:04:24.855 "large_cache_size": 16, 00:04:24.855 "task_count": 2048, 00:04:24.855 "sequence_count": 2048, 00:04:24.855 "buf_count": 2048 00:04:24.855 } 00:04:24.855 } 00:04:24.855 ] 00:04:24.855 }, 00:04:24.855 { 00:04:24.855 "subsystem": "bdev", 00:04:24.855 "config": [ 00:04:24.855 { 00:04:24.855 "method": "bdev_set_options", 00:04:24.856 "params": { 00:04:24.856 "bdev_io_pool_size": 65535, 00:04:24.856 "bdev_io_cache_size": 256, 00:04:24.856 "bdev_auto_examine": true, 00:04:24.856 "iobuf_small_cache_size": 128, 00:04:24.856 "iobuf_large_cache_size": 16 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "bdev_raid_set_options", 00:04:24.856 "params": { 00:04:24.856 "process_window_size_kb": 1024 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "bdev_iscsi_set_options", 00:04:24.856 "params": { 00:04:24.856 "timeout_sec": 30 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "bdev_nvme_set_options", 00:04:24.856 "params": { 00:04:24.856 "action_on_timeout": "none", 00:04:24.856 "timeout_us": 0, 00:04:24.856 "timeout_admin_us": 0, 00:04:24.856 "keep_alive_timeout_ms": 10000, 00:04:24.856 "arbitration_burst": 0, 00:04:24.856 "low_priority_weight": 0, 00:04:24.856 "medium_priority_weight": 0, 00:04:24.856 "high_priority_weight": 0, 00:04:24.856 "nvme_adminq_poll_period_us": 10000, 00:04:24.856 "nvme_ioq_poll_period_us": 0, 00:04:24.856 "io_queue_requests": 0, 00:04:24.856 "delay_cmd_submit": true, 00:04:24.856 "transport_retry_count": 4, 00:04:24.856 "bdev_retry_count": 3, 00:04:24.856 "transport_ack_timeout": 0, 00:04:24.856 "ctrlr_loss_timeout_sec": 0, 00:04:24.856 "reconnect_delay_sec": 0, 00:04:24.856 "fast_io_fail_timeout_sec": 0, 00:04:24.856 "disable_auto_failback": false, 00:04:24.856 "generate_uuids": false, 00:04:24.856 "transport_tos": 0, 00:04:24.856 "nvme_error_stat": false, 00:04:24.856 "rdma_srq_size": 0, 00:04:24.856 "io_path_stat": false, 00:04:24.856 "allow_accel_sequence": false, 00:04:24.856 "rdma_max_cq_size": 0, 00:04:24.856 "rdma_cm_event_timeout_ms": 0, 00:04:24.856 "dhchap_digests": [ 00:04:24.856 "sha256", 00:04:24.856 "sha384", 00:04:24.856 "sha512" 00:04:24.856 ], 00:04:24.856 "dhchap_dhgroups": [ 00:04:24.856 "null", 00:04:24.856 "ffdhe2048", 00:04:24.856 "ffdhe3072", 00:04:24.856 "ffdhe4096", 00:04:24.856 
"ffdhe6144", 00:04:24.856 "ffdhe8192" 00:04:24.856 ] 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "bdev_nvme_set_hotplug", 00:04:24.856 "params": { 00:04:24.856 "period_us": 100000, 00:04:24.856 "enable": false 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "bdev_wait_for_examine" 00:04:24.856 } 00:04:24.856 ] 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "scsi", 00:04:24.856 "config": null 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "scheduler", 00:04:24.856 "config": [ 00:04:24.856 { 00:04:24.856 "method": "framework_set_scheduler", 00:04:24.856 "params": { 00:04:24.856 "name": "static" 00:04:24.856 } 00:04:24.856 } 00:04:24.856 ] 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "vhost_scsi", 00:04:24.856 "config": [] 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "vhost_blk", 00:04:24.856 "config": [] 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "ublk", 00:04:24.856 "config": [] 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "nbd", 00:04:24.856 "config": [] 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "nvmf", 00:04:24.856 "config": [ 00:04:24.856 { 00:04:24.856 "method": "nvmf_set_config", 00:04:24.856 "params": { 00:04:24.856 "discovery_filter": "match_any", 00:04:24.856 "admin_cmd_passthru": { 00:04:24.856 "identify_ctrlr": false 00:04:24.856 } 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "nvmf_set_max_subsystems", 00:04:24.856 "params": { 00:04:24.856 "max_subsystems": 1024 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "nvmf_set_crdt", 00:04:24.856 "params": { 00:04:24.856 "crdt1": 0, 00:04:24.856 "crdt2": 0, 00:04:24.856 "crdt3": 0 00:04:24.856 } 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "method": "nvmf_create_transport", 00:04:24.856 "params": { 00:04:24.856 "trtype": "TCP", 00:04:24.856 "max_queue_depth": 128, 00:04:24.856 "max_io_qpairs_per_ctrlr": 127, 00:04:24.856 "in_capsule_data_size": 4096, 00:04:24.856 "max_io_size": 131072, 00:04:24.856 "io_unit_size": 131072, 00:04:24.856 "max_aq_depth": 128, 00:04:24.856 "num_shared_buffers": 511, 00:04:24.856 "buf_cache_size": 4294967295, 00:04:24.856 "dif_insert_or_strip": false, 00:04:24.856 "zcopy": false, 00:04:24.856 "c2h_success": true, 00:04:24.856 "sock_priority": 0, 00:04:24.856 "abort_timeout_sec": 1, 00:04:24.856 "ack_timeout": 0, 00:04:24.856 "data_wr_pool_size": 0 00:04:24.856 } 00:04:24.856 } 00:04:24.856 ] 00:04:24.856 }, 00:04:24.856 { 00:04:24.856 "subsystem": "iscsi", 00:04:24.856 "config": [ 00:04:24.856 { 00:04:24.856 "method": "iscsi_set_options", 00:04:24.856 "params": { 00:04:24.856 "node_base": "iqn.2016-06.io.spdk", 00:04:24.856 "max_sessions": 128, 00:04:24.856 "max_connections_per_session": 2, 00:04:24.856 "max_queue_depth": 64, 00:04:24.856 "default_time2wait": 2, 00:04:24.856 "default_time2retain": 20, 00:04:24.856 "first_burst_length": 8192, 00:04:24.856 "immediate_data": true, 00:04:24.856 "allow_duplicated_isid": false, 00:04:24.856 "error_recovery_level": 0, 00:04:24.856 "nop_timeout": 60, 00:04:24.856 "nop_in_interval": 30, 00:04:24.856 "disable_chap": false, 00:04:24.856 "require_chap": false, 00:04:24.856 "mutual_chap": false, 00:04:24.856 "chap_group": 0, 00:04:24.856 "max_large_datain_per_connection": 64, 00:04:24.856 "max_r2t_per_connection": 4, 00:04:24.856 "pdu_pool_size": 36864, 00:04:24.856 "immediate_data_pool_size": 16384, 00:04:24.856 "data_out_pool_size": 2048 00:04:24.856 } 00:04:24.856 } 00:04:24.856 ] 00:04:24.856 } 
00:04:24.856 ] 00:04:24.856 } 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 746660 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 746660 ']' 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 746660 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 746660 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 746660' 00:04:24.856 killing process with pid 746660 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 746660 00:04:24.856 00:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 746660 00:04:25.459 00:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=746815 00:04:25.459 00:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.459 00:19:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 746815 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 746815 ']' 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 746815 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 746815 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 746815' 00:04:30.720 killing process with pid 746815 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 746815 00:04:30.720 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 746815 00:04:30.979 00:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:30.979 00:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:30.979 00:04:30.979 real 0m7.153s 00:04:30.979 user 0m6.921s 00:04:30.980 sys 0m0.754s 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:30.980 00:19:56 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.980 ************************************ 00:04:30.980 END TEST skip_rpc_with_json 00:04:30.980 ************************************ 00:04:30.980 00:19:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:30.980 00:19:56 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:30.980 00:19:56 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:30.980 00:19:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.980 ************************************ 00:04:30.980 START TEST skip_rpc_with_delay 00:04:30.980 ************************************ 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:30.980 00:19:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.980 [2024-05-15 00:19:57.011447] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
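skip_rpc_with_delay asserts that this flag combination is rejected up front, since --wait-for-rpc only makes sense when an RPC server will actually be started. Reproduced by hand under the same assumptions:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # expected: the "Cannot use '--wait-for-rpc' ..." error above and a non-zero exit code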
00:04:30.980 [2024-05-15 00:19:57.011565] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:30.980 00:19:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:30.980 00:19:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:30.980 00:19:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:30.980 00:19:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:30.980 00:04:30.980 real 0m0.069s 00:04:30.980 user 0m0.039s 00:04:30.980 sys 0m0.029s 00:04:30.980 00:19:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:30.980 00:19:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:30.980 ************************************ 00:04:30.980 END TEST skip_rpc_with_delay 00:04:30.980 ************************************ 00:04:30.980 00:19:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:30.980 00:19:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:30.980 00:19:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:30.980 00:19:57 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:30.980 00:19:57 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:30.980 00:19:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.980 ************************************ 00:04:30.980 START TEST exit_on_failed_rpc_init 00:04:30.980 ************************************ 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=747529 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 747529 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 747529 ']' 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:30.980 00:19:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.980 [2024-05-15 00:19:57.133719] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
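exit_on_failed_rpc_init, which starts here, brings up one target on the default RPC socket and then launches a second instance against the same socket, expecting the second one to bail out during rpc init. A hand-run sketch under the same assumptions:

    ./build/bin/spdk_tgt -m 0x1 &      # first instance owns /var/tmp/spdk.sock
    # once it is listening, a second instance on the same socket must fail:
    ./build/bin/spdk_tgt -m 0x2
    # expected: "RPC Unix domain socket path /var/tmp/spdk.sock in use" and a non-zero exit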
00:04:30.980 [2024-05-15 00:19:57.133823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747529 ] 00:04:31.238 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.238 [2024-05-15 00:19:57.208018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.238 [2024-05-15 00:19:57.325189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:32.172 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.172 [2024-05-15 00:19:58.150435] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:04:32.172 [2024-05-15 00:19:58.150524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747667 ] 00:04:32.172 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.172 [2024-05-15 00:19:58.222785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.430 [2024-05-15 00:19:58.344054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.430 [2024-05-15 00:19:58.344147] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:32.430 [2024-05-15 00:19:58.344167] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:32.430 [2024-05-15 00:19:58.344180] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 747529 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 747529 ']' 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 747529 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 747529 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 747529' 00:04:32.430 killing process with pid 747529 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 747529 00:04:32.430 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 747529 00:04:32.997 00:04:32.997 real 0m1.899s 00:04:32.997 user 0m2.292s 00:04:32.997 sys 0m0.498s 00:04:32.997 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:32.997 00:19:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.997 ************************************ 00:04:32.997 END TEST exit_on_failed_rpc_init 00:04:32.997 ************************************ 00:04:32.997 00:19:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:32.997 00:04:32.997 real 0m14.890s 00:04:32.997 user 0m14.509s 00:04:32.997 sys 0m1.802s 00:04:32.997 00:19:59 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:32.997 00:19:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.997 ************************************ 00:04:32.997 END TEST skip_rpc 00:04:32.997 ************************************ 00:04:32.997 00:19:59 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:32.997 00:19:59 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:32.997 00:19:59 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:32.997 00:19:59 -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.997 ************************************ 00:04:32.997 START TEST rpc_client 00:04:32.997 ************************************ 00:04:32.997 00:19:59 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:32.997 * Looking for test storage... 00:04:32.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:32.997 00:19:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:32.997 OK 00:04:32.997 00:19:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:32.997 00:04:32.997 real 0m0.068s 00:04:32.997 user 0m0.030s 00:04:32.997 sys 0m0.042s 00:04:32.997 00:19:59 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:32.997 00:19:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:32.997 ************************************ 00:04:32.997 END TEST rpc_client 00:04:32.997 ************************************ 00:04:32.997 00:19:59 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:32.997 00:19:59 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:32.997 00:19:59 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:32.997 00:19:59 -- common/autotest_common.sh@10 -- # set +x 00:04:33.256 ************************************ 00:04:33.256 START TEST json_config 00:04:33.256 ************************************ 00:04:33.256 00:19:59 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:33.256 00:19:59 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.256 00:19:59 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.256 00:19:59 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.256 00:19:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.256 00:19:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.256 00:19:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.256 00:19:59 json_config -- paths/export.sh@5 -- # export PATH 00:04:33.256 00:19:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@47 -- # : 0 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:33.256 00:19:59 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:33.256 INFO: JSON configuration test init 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:33.256 00:19:59 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:33.256 00:19:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:33.256 00:19:59 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:33.256 00:19:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.256 00:19:59 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:33.256 00:19:59 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.256 00:19:59 json_config -- json_config/common.sh@10 -- # shift 00:04:33.256 00:19:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.256 00:19:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.256 00:19:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.256 00:19:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.256 00:19:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.256 00:19:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=747911 00:04:33.257 00:19:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:33.257 00:19:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.257 Waiting for target to run... 
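(Note on the launch above: the target is started with --wait-for-rpc, so it comes up with only the RPC server listening and defers subsystem initialization; the harness then polls the UNIX socket (waitforlisten) before issuing RPCs, and in this test initialization is ultimately driven through the load_config call seen further below. A rough hand-run equivalent — the polling loop and the explicit framework_start_init call are illustrative assumptions standing in for what the harness does, not a transcript of it:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done            # crude stand-in for waitforlisten
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init    # finish initialization once config is in place
)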
00:04:33.257 00:19:59 json_config -- json_config/common.sh@25 -- # waitforlisten 747911 /var/tmp/spdk_tgt.sock 00:04:33.257 00:19:59 json_config -- common/autotest_common.sh@828 -- # '[' -z 747911 ']' 00:04:33.257 00:19:59 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.257 00:19:59 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:33.257 00:19:59 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.257 00:19:59 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:33.257 00:19:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.257 [2024-05-15 00:19:59.273369] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:04:33.257 [2024-05-15 00:19:59.273470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747911 ] 00:04:33.257 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.824 [2024-05-15 00:19:59.783900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.824 [2024-05-15 00:19:59.890961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.082 00:20:00 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:34.082 00:20:00 json_config -- common/autotest_common.sh@861 -- # return 0 00:04:34.082 00:20:00 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.082 00:04:34.082 00:20:00 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:34.082 00:20:00 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:34.082 00:20:00 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:34.082 00:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.082 00:20:00 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:34.082 00:20:00 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:34.082 00:20:00 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:34.082 00:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.082 00:20:00 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:34.082 00:20:00 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:34.082 00:20:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:37.362 00:20:03 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:37.362 00:20:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:37.362 00:20:03 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:37.362 00:20:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.362 00:20:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:37.362 00:20:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:37.362 00:20:03 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:04:37.362 00:20:03 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:37.362 00:20:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:37.362 00:20:03 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:37.620 00:20:03 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:37.620 00:20:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:37.620 00:20:03 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:37.620 00:20:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:37.620 00:20:03 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.620 00:20:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.877 MallocForNvmf0 00:04:37.877 00:20:03 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.877 00:20:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:38.134 MallocForNvmf1 00:04:38.134 00:20:04 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:38.134 00:20:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:38.391 [2024-05-15 00:20:04.416491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.391 00:20:04 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.391 00:20:04 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.648 00:20:04 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.648 00:20:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.905 00:20:04 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.905 00:20:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:39.162 00:20:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:39.162 00:20:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:39.420 [2024-05-15 00:20:05.399224] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:39.420 [2024-05-15 00:20:05.399797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:39.420 00:20:05 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:39.420 00:20:05 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:39.420 00:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.420 00:20:05 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:39.420 00:20:05 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:39.420 00:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.420 00:20:05 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:39.420 00:20:05 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:39.420 00:20:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:39.678 MallocBdevForConfigChangeCheck 00:04:39.678 00:20:05 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:39.678 00:20:05 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:39.678 00:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.678 00:20:05 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:39.678 00:20:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.935 00:20:06 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:39.935 INFO: shutting down applications... 
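(Stripped of the harness plumbing, the target configuration built above reduces to a short rpc.py sequence against the same /var/tmp/spdk_tgt.sock socket; this sketch just collects the commands visible in the trace, with the $RPC shorthand and the output filename being illustrative:

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > spdk_tgt_config.json   # snapshot reused by the diff checks below

The deprecation notice about [listen_]address.transport emitted by the listener step is expected on this revision and is what the later log_deprecation_hits warning refers to.)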
00:04:39.935 00:20:06 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:39.935 00:20:06 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:39.935 00:20:06 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:39.936 00:20:06 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:41.833 Calling clear_iscsi_subsystem 00:04:41.833 Calling clear_nvmf_subsystem 00:04:41.833 Calling clear_nbd_subsystem 00:04:41.833 Calling clear_ublk_subsystem 00:04:41.833 Calling clear_vhost_blk_subsystem 00:04:41.833 Calling clear_vhost_scsi_subsystem 00:04:41.833 Calling clear_bdev_subsystem 00:04:41.833 00:20:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:41.833 00:20:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:41.833 00:20:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:41.833 00:20:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.833 00:20:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:41.833 00:20:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:42.091 00:20:08 json_config -- json_config/json_config.sh@345 -- # break 00:04:42.091 00:20:08 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:42.091 00:20:08 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:42.091 00:20:08 json_config -- json_config/common.sh@31 -- # local app=target 00:04:42.091 00:20:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.091 00:20:08 json_config -- json_config/common.sh@35 -- # [[ -n 747911 ]] 00:04:42.091 00:20:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 747911 00:04:42.091 [2024-05-15 00:20:08.117622] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:42.091 00:20:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.091 00:20:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.091 00:20:08 json_config -- json_config/common.sh@41 -- # kill -0 747911 00:04:42.091 00:20:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.658 00:20:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.658 00:20:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.658 00:20:08 json_config -- json_config/common.sh@41 -- # kill -0 747911 00:04:42.658 00:20:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.658 00:20:08 json_config -- json_config/common.sh@43 -- # break 00:04:42.658 00:20:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.658 00:20:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.658 SPDK target shutdown done 00:04:42.658 00:20:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
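(The shutdown just completed follows the harness pattern of clearing the configuration, sending SIGINT, and polling until the PID disappears; a condensed sketch of that loop is below — the real implementation is the json_config/common.sh code visible in the trace, and $spdk_pid stands in for the recorded PID 747911:

    ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config   # tear down bdevs/subsystems first
    kill -SIGINT "$spdk_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$spdk_pid" 2>/dev/null || break   # process gone: "SPDK target shutdown done"
        sleep 0.5
    done
)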
00:04:42.658 INFO: relaunching applications... 00:04:42.658 00:20:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.658 00:20:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.658 00:20:08 json_config -- json_config/common.sh@10 -- # shift 00:04:42.658 00:20:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.658 00:20:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.658 00:20:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.658 00:20:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.658 00:20:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.658 00:20:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=749188 00:04:42.658 00:20:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.658 00:20:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.658 Waiting for target to run... 00:04:42.658 00:20:08 json_config -- json_config/common.sh@25 -- # waitforlisten 749188 /var/tmp/spdk_tgt.sock 00:04:42.658 00:20:08 json_config -- common/autotest_common.sh@828 -- # '[' -z 749188 ']' 00:04:42.658 00:20:08 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.658 00:20:08 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:42.658 00:20:08 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.658 00:20:08 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:42.658 00:20:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.658 [2024-05-15 00:20:08.674729] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:04:42.658 [2024-05-15 00:20:08.674845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749188 ] 00:04:42.658 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.286 [2024-05-15 00:20:09.210833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.286 [2024-05-15 00:20:09.320119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.579 [2024-05-15 00:20:12.363873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.579 [2024-05-15 00:20:12.395838] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:46.579 [2024-05-15 00:20:12.396355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.144 00:20:13 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:47.144 00:20:13 json_config -- common/autotest_common.sh@861 -- # return 0 00:04:47.144 00:20:13 json_config -- json_config/common.sh@26 -- # echo '' 00:04:47.144 00:04:47.144 00:20:13 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:47.144 00:20:13 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:47.144 INFO: Checking if target configuration is the same... 00:04:47.144 00:20:13 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.144 00:20:13 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:47.144 00:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.144 + '[' 2 -ne 2 ']' 00:04:47.144 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.144 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:47.144 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:47.144 +++ basename /dev/fd/62 00:04:47.144 ++ mktemp /tmp/62.XXX 00:04:47.144 + tmp_file_1=/tmp/62.FY9 00:04:47.144 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.144 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.144 + tmp_file_2=/tmp/spdk_tgt_config.json.rIT 00:04:47.144 + ret=0 00:04:47.144 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.401 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.401 + diff -u /tmp/62.FY9 /tmp/spdk_tgt_config.json.rIT 00:04:47.401 + echo 'INFO: JSON config files are the same' 00:04:47.401 INFO: JSON config files are the same 00:04:47.401 + rm /tmp/62.FY9 /tmp/spdk_tgt_config.json.rIT 00:04:47.401 + exit 0 00:04:47.401 00:20:13 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:47.401 00:20:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:47.401 INFO: changing configuration and checking if this can be detected... 
00:04:47.401 00:20:13 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.401 00:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.658 00:20:13 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.658 00:20:13 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:47.658 00:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.658 + '[' 2 -ne 2 ']' 00:04:47.658 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.658 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:47.658 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:47.658 +++ basename /dev/fd/62 00:04:47.658 ++ mktemp /tmp/62.XXX 00:04:47.658 + tmp_file_1=/tmp/62.8kz 00:04:47.658 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.658 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.658 + tmp_file_2=/tmp/spdk_tgt_config.json.FLm 00:04:47.658 + ret=0 00:04:47.658 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.222 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.222 + diff -u /tmp/62.8kz /tmp/spdk_tgt_config.json.FLm 00:04:48.222 + ret=1 00:04:48.222 + echo '=== Start of file: /tmp/62.8kz ===' 00:04:48.222 + cat /tmp/62.8kz 00:04:48.222 + echo '=== End of file: /tmp/62.8kz ===' 00:04:48.222 + echo '' 00:04:48.222 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FLm ===' 00:04:48.222 + cat /tmp/spdk_tgt_config.json.FLm 00:04:48.222 + echo '=== End of file: /tmp/spdk_tgt_config.json.FLm ===' 00:04:48.222 + echo '' 00:04:48.222 + rm /tmp/62.8kz /tmp/spdk_tgt_config.json.FLm 00:04:48.222 + exit 1 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:48.222 INFO: configuration change detected. 
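(Both comparison passes above use the same recipe: dump the running configuration over RPC, canonicalize it and the on-disk spdk_tgt_config.json with config_filter.py, and diff the two. A condensed sketch — the temp-file names differ from the mktemp outputs in the trace, and stdin/stdout redirection of config_filter.py is assumed to mirror json_diff.sh:

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=./test/json_config/config_filter.py
    $RPC save_config | $FILTER -method sort > /tmp/running.json
    $FILTER -method sort < spdk_tgt_config.json > /tmp/ondisk.json
    diff -u /tmp/running.json /tmp/ondisk.json && echo 'INFO: JSON config files are the same'

    # second pass: mutate the live config and expect the diff to be non-empty
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
    $RPC save_config | $FILTER -method sort > /tmp/running.json
    diff -u /tmp/running.json /tmp/ondisk.json || echo 'INFO: configuration change detected.'
)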
00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@317 -- # [[ -n 749188 ]] 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.222 00:20:14 json_config -- json_config/json_config.sh@323 -- # killprocess 749188 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@947 -- # '[' -z 749188 ']' 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@951 -- # kill -0 749188 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@952 -- # uname 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 749188 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 749188' 00:04:48.222 killing process with pid 749188 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@966 -- # kill 749188 00:04:48.222 [2024-05-15 00:20:14.252798] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:48.222 00:20:14 json_config -- common/autotest_common.sh@971 -- # wait 749188 00:04:50.122 00:20:15 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.122 00:20:15 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:50.122 00:20:15 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:50.122 00:20:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.122 00:20:15 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:50.122 00:20:15 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:50.122 INFO: Success 00:04:50.122 00:04:50.122 real 0m16.766s 00:04:50.122 user 0m18.511s 00:04:50.122 sys 0m2.277s 00:04:50.122 00:20:15 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:50.122 00:20:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.122 ************************************ 00:04:50.122 END TEST json_config 00:04:50.122 ************************************ 00:04:50.122 00:20:15 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.122 00:20:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:50.122 00:20:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:50.122 00:20:15 -- common/autotest_common.sh@10 -- # set +x 00:04:50.122 ************************************ 00:04:50.122 START TEST json_config_extra_key 00:04:50.122 ************************************ 00:04:50.122 00:20:15 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.122 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.122 00:20:16 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.122 00:20:16 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.122 00:20:16 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.122 
00:20:16 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.122 00:20:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.122 00:20:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.122 00:20:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.122 00:20:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.123 00:20:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:50.123 00:20:16 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.123 00:20:16 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.123 INFO: launching applications... 00:04:50.123 00:20:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=750145 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.123 Waiting for target to run... 00:04:50.123 00:20:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 750145 /var/tmp/spdk_tgt.sock 00:04:50.123 00:20:16 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 750145 ']' 00:04:50.123 00:20:16 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.123 00:20:16 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:50.123 00:20:16 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.123 00:20:16 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:50.123 00:20:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.123 [2024-05-15 00:20:16.101308] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:04:50.123 [2024-05-15 00:20:16.101426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750145 ] 00:04:50.123 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.692 [2024-05-15 00:20:16.614892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.692 [2024-05-15 00:20:16.722646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.950 00:20:17 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:50.950 00:20:17 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:50.950 00:04:50.950 00:20:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:50.950 INFO: shutting down applications... 00:04:50.950 00:20:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 750145 ]] 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 750145 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 750145 00:04:50.950 00:20:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.516 00:20:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.516 00:20:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.516 00:20:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 750145 00:04:51.516 00:20:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.516 00:20:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.516 00:20:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.516 00:20:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.516 SPDK target shutdown done 00:04:51.516 00:20:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.516 Success 00:04:51.516 00:04:51.516 real 0m1.587s 00:04:51.516 user 0m1.449s 00:04:51.516 sys 0m0.603s 00:04:51.517 00:20:17 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:51.517 00:20:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.517 ************************************ 00:04:51.517 END TEST json_config_extra_key 00:04:51.517 ************************************ 00:04:51.517 00:20:17 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.517 00:20:17 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:51.517 00:20:17 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:51.517 00:20:17 -- common/autotest_common.sh@10 -- # set +x 00:04:51.517 ************************************ 
00:04:51.517 START TEST alias_rpc 00:04:51.517 ************************************ 00:04:51.517 00:20:17 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.775 * Looking for test storage... 00:04:51.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:51.775 00:20:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.775 00:20:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=750444 00:04:51.775 00:20:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.775 00:20:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 750444 00:04:51.775 00:20:17 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 750444 ']' 00:04:51.775 00:20:17 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.775 00:20:17 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:51.775 00:20:17 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.775 00:20:17 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:51.775 00:20:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.775 [2024-05-15 00:20:17.738052] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:04:51.775 [2024-05-15 00:20:17.738132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750444 ] 00:04:51.775 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.775 [2024-05-15 00:20:17.804316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.775 [2024-05-15 00:20:17.909973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.034 00:20:18 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:52.034 00:20:18 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:04:52.034 00:20:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:52.291 00:20:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 750444 00:04:52.292 00:20:18 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 750444 ']' 00:04:52.292 00:20:18 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 750444 00:04:52.292 00:20:18 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:04:52.292 00:20:18 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:52.292 00:20:18 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 750444 00:04:52.549 00:20:18 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:52.549 00:20:18 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:52.549 00:20:18 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 750444' 00:04:52.549 killing process with pid 750444 00:04:52.549 00:20:18 alias_rpc -- common/autotest_common.sh@966 -- # kill 750444 00:04:52.549 00:20:18 alias_rpc -- common/autotest_common.sh@971 -- # wait 750444 00:04:52.806 
00:04:52.806 real 0m1.274s 00:04:52.806 user 0m1.335s 00:04:52.806 sys 0m0.433s 00:04:52.806 00:20:18 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:52.806 00:20:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.806 ************************************ 00:04:52.806 END TEST alias_rpc 00:04:52.806 ************************************ 00:04:52.806 00:20:18 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:52.806 00:20:18 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:52.806 00:20:18 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:52.806 00:20:18 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:52.806 00:20:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.806 ************************************ 00:04:52.806 START TEST spdkcli_tcp 00:04:52.806 ************************************ 00:04:52.806 00:20:18 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.063 * Looking for test storage... 00:04:53.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:53.063 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:53.063 00:20:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:53.063 00:20:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:53.064 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.064 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.064 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.064 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.064 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=750641 00:04:53.064 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.064 00:20:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 750641 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 750641 ']' 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:53.064 00:20:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.064 [2024-05-15 00:20:19.069430] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
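spdkcli_tcp exercises the same RPC server over TCP instead of the default UNIX socket: the target keeps listening on /var/tmp/spdk.sock, and, as the following lines show, socat bridges 127.0.0.1:9998 onto that socket so rpc.py can connect with -s/-p. A rough sketch of the bridge, using the addresses and flags visible in the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # one-shot TCP-to-UNIX relay
    socat_pid=$!
    # -r retries the connection, -t sets a timeout, -s/-p point at the TCP side
    "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true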
00:04:53.064 [2024-05-15 00:20:19.069529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750641 ] 00:04:53.064 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.064 [2024-05-15 00:20:19.137596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.322 [2024-05-15 00:20:19.248115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.322 [2024-05-15 00:20:19.248121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.887 00:20:20 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:53.887 00:20:20 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:04:53.887 00:20:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=750780 00:04:53.887 00:20:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.887 00:20:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.146 [ 00:04:54.146 "bdev_malloc_delete", 00:04:54.146 "bdev_malloc_create", 00:04:54.146 "bdev_null_resize", 00:04:54.146 "bdev_null_delete", 00:04:54.146 "bdev_null_create", 00:04:54.146 "bdev_nvme_cuse_unregister", 00:04:54.146 "bdev_nvme_cuse_register", 00:04:54.146 "bdev_opal_new_user", 00:04:54.146 "bdev_opal_set_lock_state", 00:04:54.146 "bdev_opal_delete", 00:04:54.146 "bdev_opal_get_info", 00:04:54.146 "bdev_opal_create", 00:04:54.146 "bdev_nvme_opal_revert", 00:04:54.146 "bdev_nvme_opal_init", 00:04:54.146 "bdev_nvme_send_cmd", 00:04:54.146 "bdev_nvme_get_path_iostat", 00:04:54.146 "bdev_nvme_get_mdns_discovery_info", 00:04:54.146 "bdev_nvme_stop_mdns_discovery", 00:04:54.146 "bdev_nvme_start_mdns_discovery", 00:04:54.146 "bdev_nvme_set_multipath_policy", 00:04:54.146 "bdev_nvme_set_preferred_path", 00:04:54.146 "bdev_nvme_get_io_paths", 00:04:54.146 "bdev_nvme_remove_error_injection", 00:04:54.146 "bdev_nvme_add_error_injection", 00:04:54.146 "bdev_nvme_get_discovery_info", 00:04:54.146 "bdev_nvme_stop_discovery", 00:04:54.146 "bdev_nvme_start_discovery", 00:04:54.146 "bdev_nvme_get_controller_health_info", 00:04:54.146 "bdev_nvme_disable_controller", 00:04:54.146 "bdev_nvme_enable_controller", 00:04:54.146 "bdev_nvme_reset_controller", 00:04:54.146 "bdev_nvme_get_transport_statistics", 00:04:54.146 "bdev_nvme_apply_firmware", 00:04:54.146 "bdev_nvme_detach_controller", 00:04:54.146 "bdev_nvme_get_controllers", 00:04:54.146 "bdev_nvme_attach_controller", 00:04:54.146 "bdev_nvme_set_hotplug", 00:04:54.146 "bdev_nvme_set_options", 00:04:54.146 "bdev_passthru_delete", 00:04:54.147 "bdev_passthru_create", 00:04:54.147 "bdev_lvol_check_shallow_copy", 00:04:54.147 "bdev_lvol_start_shallow_copy", 00:04:54.147 "bdev_lvol_grow_lvstore", 00:04:54.147 "bdev_lvol_get_lvols", 00:04:54.147 "bdev_lvol_get_lvstores", 00:04:54.147 "bdev_lvol_delete", 00:04:54.147 "bdev_lvol_set_read_only", 00:04:54.147 "bdev_lvol_resize", 00:04:54.147 "bdev_lvol_decouple_parent", 00:04:54.147 "bdev_lvol_inflate", 00:04:54.147 "bdev_lvol_rename", 00:04:54.147 "bdev_lvol_clone_bdev", 00:04:54.147 "bdev_lvol_clone", 00:04:54.147 "bdev_lvol_snapshot", 00:04:54.147 "bdev_lvol_create", 00:04:54.147 "bdev_lvol_delete_lvstore", 00:04:54.147 "bdev_lvol_rename_lvstore", 00:04:54.147 "bdev_lvol_create_lvstore", 00:04:54.147 "bdev_raid_set_options", 
00:04:54.147 "bdev_raid_remove_base_bdev", 00:04:54.147 "bdev_raid_add_base_bdev", 00:04:54.147 "bdev_raid_delete", 00:04:54.147 "bdev_raid_create", 00:04:54.147 "bdev_raid_get_bdevs", 00:04:54.147 "bdev_error_inject_error", 00:04:54.147 "bdev_error_delete", 00:04:54.147 "bdev_error_create", 00:04:54.147 "bdev_split_delete", 00:04:54.147 "bdev_split_create", 00:04:54.147 "bdev_delay_delete", 00:04:54.147 "bdev_delay_create", 00:04:54.147 "bdev_delay_update_latency", 00:04:54.147 "bdev_zone_block_delete", 00:04:54.147 "bdev_zone_block_create", 00:04:54.147 "blobfs_create", 00:04:54.147 "blobfs_detect", 00:04:54.147 "blobfs_set_cache_size", 00:04:54.147 "bdev_aio_delete", 00:04:54.147 "bdev_aio_rescan", 00:04:54.147 "bdev_aio_create", 00:04:54.147 "bdev_ftl_set_property", 00:04:54.147 "bdev_ftl_get_properties", 00:04:54.147 "bdev_ftl_get_stats", 00:04:54.147 "bdev_ftl_unmap", 00:04:54.147 "bdev_ftl_unload", 00:04:54.147 "bdev_ftl_delete", 00:04:54.147 "bdev_ftl_load", 00:04:54.147 "bdev_ftl_create", 00:04:54.147 "bdev_virtio_attach_controller", 00:04:54.147 "bdev_virtio_scsi_get_devices", 00:04:54.147 "bdev_virtio_detach_controller", 00:04:54.147 "bdev_virtio_blk_set_hotplug", 00:04:54.147 "bdev_iscsi_delete", 00:04:54.147 "bdev_iscsi_create", 00:04:54.147 "bdev_iscsi_set_options", 00:04:54.147 "accel_error_inject_error", 00:04:54.147 "ioat_scan_accel_module", 00:04:54.147 "dsa_scan_accel_module", 00:04:54.147 "iaa_scan_accel_module", 00:04:54.147 "vfu_virtio_create_scsi_endpoint", 00:04:54.147 "vfu_virtio_scsi_remove_target", 00:04:54.147 "vfu_virtio_scsi_add_target", 00:04:54.147 "vfu_virtio_create_blk_endpoint", 00:04:54.147 "vfu_virtio_delete_endpoint", 00:04:54.147 "keyring_file_remove_key", 00:04:54.147 "keyring_file_add_key", 00:04:54.147 "iscsi_get_histogram", 00:04:54.147 "iscsi_enable_histogram", 00:04:54.147 "iscsi_set_options", 00:04:54.147 "iscsi_get_auth_groups", 00:04:54.147 "iscsi_auth_group_remove_secret", 00:04:54.147 "iscsi_auth_group_add_secret", 00:04:54.147 "iscsi_delete_auth_group", 00:04:54.147 "iscsi_create_auth_group", 00:04:54.147 "iscsi_set_discovery_auth", 00:04:54.147 "iscsi_get_options", 00:04:54.147 "iscsi_target_node_request_logout", 00:04:54.147 "iscsi_target_node_set_redirect", 00:04:54.147 "iscsi_target_node_set_auth", 00:04:54.147 "iscsi_target_node_add_lun", 00:04:54.147 "iscsi_get_stats", 00:04:54.147 "iscsi_get_connections", 00:04:54.147 "iscsi_portal_group_set_auth", 00:04:54.147 "iscsi_start_portal_group", 00:04:54.147 "iscsi_delete_portal_group", 00:04:54.147 "iscsi_create_portal_group", 00:04:54.147 "iscsi_get_portal_groups", 00:04:54.147 "iscsi_delete_target_node", 00:04:54.147 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.147 "iscsi_target_node_add_pg_ig_maps", 00:04:54.147 "iscsi_create_target_node", 00:04:54.147 "iscsi_get_target_nodes", 00:04:54.147 "iscsi_delete_initiator_group", 00:04:54.147 "iscsi_initiator_group_remove_initiators", 00:04:54.147 "iscsi_initiator_group_add_initiators", 00:04:54.147 "iscsi_create_initiator_group", 00:04:54.147 "iscsi_get_initiator_groups", 00:04:54.147 "nvmf_set_crdt", 00:04:54.147 "nvmf_set_config", 00:04:54.147 "nvmf_set_max_subsystems", 00:04:54.147 "nvmf_stop_mdns_prr", 00:04:54.147 "nvmf_publish_mdns_prr", 00:04:54.147 "nvmf_subsystem_get_listeners", 00:04:54.147 "nvmf_subsystem_get_qpairs", 00:04:54.147 "nvmf_subsystem_get_controllers", 00:04:54.147 "nvmf_get_stats", 00:04:54.147 "nvmf_get_transports", 00:04:54.147 "nvmf_create_transport", 00:04:54.147 "nvmf_get_targets", 00:04:54.147 
"nvmf_delete_target", 00:04:54.147 "nvmf_create_target", 00:04:54.147 "nvmf_subsystem_allow_any_host", 00:04:54.147 "nvmf_subsystem_remove_host", 00:04:54.147 "nvmf_subsystem_add_host", 00:04:54.147 "nvmf_ns_remove_host", 00:04:54.147 "nvmf_ns_add_host", 00:04:54.147 "nvmf_subsystem_remove_ns", 00:04:54.147 "nvmf_subsystem_add_ns", 00:04:54.147 "nvmf_subsystem_listener_set_ana_state", 00:04:54.147 "nvmf_discovery_get_referrals", 00:04:54.147 "nvmf_discovery_remove_referral", 00:04:54.147 "nvmf_discovery_add_referral", 00:04:54.147 "nvmf_subsystem_remove_listener", 00:04:54.147 "nvmf_subsystem_add_listener", 00:04:54.147 "nvmf_delete_subsystem", 00:04:54.147 "nvmf_create_subsystem", 00:04:54.147 "nvmf_get_subsystems", 00:04:54.147 "env_dpdk_get_mem_stats", 00:04:54.147 "nbd_get_disks", 00:04:54.147 "nbd_stop_disk", 00:04:54.147 "nbd_start_disk", 00:04:54.147 "ublk_recover_disk", 00:04:54.147 "ublk_get_disks", 00:04:54.147 "ublk_stop_disk", 00:04:54.147 "ublk_start_disk", 00:04:54.147 "ublk_destroy_target", 00:04:54.147 "ublk_create_target", 00:04:54.147 "virtio_blk_create_transport", 00:04:54.147 "virtio_blk_get_transports", 00:04:54.147 "vhost_controller_set_coalescing", 00:04:54.147 "vhost_get_controllers", 00:04:54.147 "vhost_delete_controller", 00:04:54.147 "vhost_create_blk_controller", 00:04:54.147 "vhost_scsi_controller_remove_target", 00:04:54.147 "vhost_scsi_controller_add_target", 00:04:54.147 "vhost_start_scsi_controller", 00:04:54.147 "vhost_create_scsi_controller", 00:04:54.147 "thread_set_cpumask", 00:04:54.147 "framework_get_scheduler", 00:04:54.147 "framework_set_scheduler", 00:04:54.147 "framework_get_reactors", 00:04:54.147 "thread_get_io_channels", 00:04:54.147 "thread_get_pollers", 00:04:54.147 "thread_get_stats", 00:04:54.147 "framework_monitor_context_switch", 00:04:54.147 "spdk_kill_instance", 00:04:54.147 "log_enable_timestamps", 00:04:54.147 "log_get_flags", 00:04:54.147 "log_clear_flag", 00:04:54.147 "log_set_flag", 00:04:54.147 "log_get_level", 00:04:54.147 "log_set_level", 00:04:54.147 "log_get_print_level", 00:04:54.147 "log_set_print_level", 00:04:54.147 "framework_enable_cpumask_locks", 00:04:54.147 "framework_disable_cpumask_locks", 00:04:54.147 "framework_wait_init", 00:04:54.147 "framework_start_init", 00:04:54.147 "scsi_get_devices", 00:04:54.147 "bdev_get_histogram", 00:04:54.147 "bdev_enable_histogram", 00:04:54.147 "bdev_set_qos_limit", 00:04:54.147 "bdev_set_qd_sampling_period", 00:04:54.147 "bdev_get_bdevs", 00:04:54.147 "bdev_reset_iostat", 00:04:54.147 "bdev_get_iostat", 00:04:54.147 "bdev_examine", 00:04:54.147 "bdev_wait_for_examine", 00:04:54.147 "bdev_set_options", 00:04:54.147 "notify_get_notifications", 00:04:54.147 "notify_get_types", 00:04:54.147 "accel_get_stats", 00:04:54.147 "accel_set_options", 00:04:54.147 "accel_set_driver", 00:04:54.147 "accel_crypto_key_destroy", 00:04:54.147 "accel_crypto_keys_get", 00:04:54.147 "accel_crypto_key_create", 00:04:54.147 "accel_assign_opc", 00:04:54.147 "accel_get_module_info", 00:04:54.147 "accel_get_opc_assignments", 00:04:54.147 "vmd_rescan", 00:04:54.147 "vmd_remove_device", 00:04:54.147 "vmd_enable", 00:04:54.147 "sock_get_default_impl", 00:04:54.147 "sock_set_default_impl", 00:04:54.147 "sock_impl_set_options", 00:04:54.147 "sock_impl_get_options", 00:04:54.147 "iobuf_get_stats", 00:04:54.147 "iobuf_set_options", 00:04:54.147 "keyring_get_keys", 00:04:54.147 "framework_get_pci_devices", 00:04:54.147 "framework_get_config", 00:04:54.147 "framework_get_subsystems", 00:04:54.147 
"vfu_tgt_set_base_path", 00:04:54.147 "trace_get_info", 00:04:54.147 "trace_get_tpoint_group_mask", 00:04:54.147 "trace_disable_tpoint_group", 00:04:54.147 "trace_enable_tpoint_group", 00:04:54.147 "trace_clear_tpoint_mask", 00:04:54.147 "trace_set_tpoint_mask", 00:04:54.147 "spdk_get_version", 00:04:54.147 "rpc_get_methods" 00:04:54.147 ] 00:04:54.147 00:20:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.147 00:20:20 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:54.147 00:20:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.147 00:20:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.147 00:20:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 750641 00:04:54.147 00:20:20 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 750641 ']' 00:04:54.147 00:20:20 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 750641 00:04:54.147 00:20:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:04:54.147 00:20:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:54.147 00:20:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 750641 00:04:54.406 00:20:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:54.406 00:20:20 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:54.406 00:20:20 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 750641' 00:04:54.406 killing process with pid 750641 00:04:54.406 00:20:20 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 750641 00:04:54.406 00:20:20 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 750641 00:04:54.664 00:04:54.664 real 0m1.805s 00:04:54.664 user 0m3.454s 00:04:54.664 sys 0m0.493s 00:04:54.664 00:20:20 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:54.664 00:20:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.664 ************************************ 00:04:54.664 END TEST spdkcli_tcp 00:04:54.664 ************************************ 00:04:54.664 00:20:20 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.664 00:20:20 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:54.664 00:20:20 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:54.664 00:20:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.664 ************************************ 00:04:54.664 START TEST dpdk_mem_utility 00:04:54.664 ************************************ 00:04:54.664 00:20:20 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.922 * Looking for test storage... 
00:04:54.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:54.922 00:20:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:54.922 00:20:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=750858 00:04:54.922 00:20:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.923 00:20:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 750858 00:04:54.923 00:20:20 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 750858 ']' 00:04:54.923 00:20:20 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.923 00:20:20 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:54.923 00:20:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.923 00:20:20 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:54.923 00:20:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.923 [2024-05-15 00:20:20.925698] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:04:54.923 [2024-05-15 00:20:20.925779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750858 ] 00:04:54.923 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.923 [2024-05-15 00:20:20.995380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.181 [2024-05-15 00:20:21.103713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.439 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:55.439 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:04:55.439 00:20:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:55.439 00:20:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:55.439 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:55.439 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.439 { 00:04:55.439 "filename": "/tmp/spdk_mem_dump.txt" 00:04:55.439 } 00:04:55.439 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:55.439 00:20:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:55.439 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:55.439 1 heaps totaling size 814.000000 MiB 00:04:55.439 size: 814.000000 MiB heap id: 0 00:04:55.439 end heaps---------- 00:04:55.439 8 mempools totaling size 598.116089 MiB 00:04:55.439 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:55.439 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:55.439 size: 84.521057 MiB name: bdev_io_750858 00:04:55.439 size: 51.011292 MiB name: evtpool_750858 00:04:55.439 size: 50.003479 MiB name: 
msgpool_750858 00:04:55.439 size: 21.763794 MiB name: PDU_Pool 00:04:55.439 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:55.439 size: 0.026123 MiB name: Session_Pool 00:04:55.439 end mempools------- 00:04:55.439 6 memzones totaling size 4.142822 MiB 00:04:55.439 size: 1.000366 MiB name: RG_ring_0_750858 00:04:55.439 size: 1.000366 MiB name: RG_ring_1_750858 00:04:55.439 size: 1.000366 MiB name: RG_ring_4_750858 00:04:55.440 size: 1.000366 MiB name: RG_ring_5_750858 00:04:55.440 size: 0.125366 MiB name: RG_ring_2_750858 00:04:55.440 size: 0.015991 MiB name: RG_ring_3_750858 00:04:55.440 end memzones------- 00:04:55.440 00:20:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:55.440 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:55.440 list of free elements. size: 12.519348 MiB 00:04:55.440 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:55.440 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:55.440 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:55.440 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:55.440 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:55.440 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:55.440 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:55.440 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:55.440 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:55.440 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:55.440 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:55.440 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:55.440 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:55.440 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:55.440 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:55.440 list of standard malloc elements. 
size: 199.218079 MiB 00:04:55.440 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:55.440 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:55.440 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:55.440 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:55.440 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:55.440 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:55.440 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:55.440 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:55.440 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:55.440 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:55.440 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:55.440 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:55.440 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:55.440 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:55.440 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:55.440 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:55.440 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:55.440 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:55.440 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:55.440 list of memzone associated elements. 
size: 602.262573 MiB 00:04:55.440 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:55.440 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:55.440 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:55.440 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:55.440 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:55.440 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_750858_0 00:04:55.440 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:55.440 associated memzone info: size: 48.002930 MiB name: MP_evtpool_750858_0 00:04:55.440 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:55.440 associated memzone info: size: 48.002930 MiB name: MP_msgpool_750858_0 00:04:55.440 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:55.440 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:55.440 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:55.440 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:55.440 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:55.440 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_750858 00:04:55.440 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:55.440 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_750858 00:04:55.440 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:55.440 associated memzone info: size: 1.007996 MiB name: MP_evtpool_750858 00:04:55.440 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:55.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:55.440 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:55.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:55.440 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:55.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:55.440 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:55.440 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:55.440 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:55.440 associated memzone info: size: 1.000366 MiB name: RG_ring_0_750858 00:04:55.440 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:55.440 associated memzone info: size: 1.000366 MiB name: RG_ring_1_750858 00:04:55.440 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:55.440 associated memzone info: size: 1.000366 MiB name: RG_ring_4_750858 00:04:55.440 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:55.440 associated memzone info: size: 1.000366 MiB name: RG_ring_5_750858 00:04:55.440 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:55.440 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_750858 00:04:55.440 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:55.440 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:55.440 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:55.440 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:55.440 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:55.440 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:55.440 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:55.440 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_750858 00:04:55.440 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:55.440 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:55.440 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:55.440 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:55.440 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:55.440 associated memzone info: size: 0.015991 MiB name: RG_ring_3_750858 00:04:55.440 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:55.440 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:55.440 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:55.440 associated memzone info: size: 0.000183 MiB name: MP_msgpool_750858 00:04:55.440 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:55.440 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_750858 00:04:55.440 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:55.440 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:55.440 00:20:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:55.440 00:20:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 750858 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 750858 ']' 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 750858 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 750858 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 750858' 00:04:55.440 killing process with pid 750858 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 750858 00:04:55.440 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 750858 00:04:56.007 00:04:56.007 real 0m1.136s 00:04:56.007 user 0m1.086s 00:04:56.007 sys 0m0.421s 00:04:56.007 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:56.007 00:20:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.007 ************************************ 00:04:56.007 END TEST dpdk_mem_utility 00:04:56.007 ************************************ 00:04:56.007 00:20:21 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:56.007 00:20:21 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:56.007 00:20:21 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:56.007 00:20:21 -- common/autotest_common.sh@10 -- # set +x 00:04:56.007 ************************************ 00:04:56.007 START TEST event 00:04:56.007 ************************************ 00:04:56.007 00:20:22 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:56.007 * Looking for test storage... 
00:04:56.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:56.007 00:20:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:56.007 00:20:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:56.007 00:20:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:56.007 00:20:22 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:04:56.007 00:20:22 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:56.007 00:20:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.007 ************************************ 00:04:56.007 START TEST event_perf 00:04:56.007 ************************************ 00:04:56.007 00:20:22 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:56.007 Running I/O for 1 seconds...[2024-05-15 00:20:22.096294] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:04:56.007 [2024-05-15 00:20:22.096350] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751073 ] 00:04:56.007 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.007 [2024-05-15 00:20:22.166444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.265 [2024-05-15 00:20:22.281025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.265 [2024-05-15 00:20:22.281103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.265 [2024-05-15 00:20:22.281100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.265 [2024-05-15 00:20:22.281044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.637 Running I/O for 1 seconds... 00:04:57.637 lcore 0: 226731 00:04:57.637 lcore 1: 226730 00:04:57.637 lcore 2: 226728 00:04:57.637 lcore 3: 226729 00:04:57.637 done. 00:04:57.637 00:04:57.637 real 0m1.322s 00:04:57.637 user 0m4.224s 00:04:57.637 sys 0m0.092s 00:04:57.637 00:20:23 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:57.637 00:20:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.637 ************************************ 00:04:57.637 END TEST event_perf 00:04:57.637 ************************************ 00:04:57.637 00:20:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:57.637 00:20:23 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:57.637 00:20:23 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:57.637 00:20:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.637 ************************************ 00:04:57.637 START TEST event_reactor 00:04:57.637 ************************************ 00:04:57.637 00:20:23 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:57.637 [2024-05-15 00:20:23.478452] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
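event_perf above ran one reactor per core in mask 0xF for one second and printed a per-lcore event counter (four counts of roughly 226.7k each). A small sketch of re-running it by hand and summing those counters, assuming the binary path from the log and a scratch file perf.out holding its raw stdout:

    EVENT_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf
    "$EVENT_PERF" -m 0xF -t 1 | tee perf.out     # -m core mask, -t seconds, as in the test
    awk '/^lcore [0-9]+:/ {sum += $NF} END {print sum, "events total"}' perf.out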
00:04:57.637 [2024-05-15 00:20:23.478517] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751321 ] 00:04:57.637 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.637 [2024-05-15 00:20:23.554201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.637 [2024-05-15 00:20:23.671300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.036 test_start 00:04:59.036 oneshot 00:04:59.036 tick 100 00:04:59.036 tick 100 00:04:59.036 tick 250 00:04:59.036 tick 100 00:04:59.036 tick 100 00:04:59.036 tick 100 00:04:59.036 tick 250 00:04:59.036 tick 500 00:04:59.036 tick 100 00:04:59.036 tick 100 00:04:59.036 tick 250 00:04:59.036 tick 100 00:04:59.036 tick 100 00:04:59.036 test_end 00:04:59.036 00:04:59.036 real 0m1.332s 00:04:59.036 user 0m1.231s 00:04:59.036 sys 0m0.096s 00:04:59.036 00:20:24 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:59.036 00:20:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:59.036 ************************************ 00:04:59.036 END TEST event_reactor 00:04:59.036 ************************************ 00:04:59.036 00:20:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.036 00:20:24 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:04:59.036 00:20:24 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:59.036 00:20:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.037 ************************************ 00:04:59.037 START TEST event_reactor_perf 00:04:59.037 ************************************ 00:04:59.037 00:20:24 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.037 [2024-05-15 00:20:24.866281] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
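reactor_perf, launched next, appears to take the same -t duration in seconds as the other event binaries; a longer window would smooth out the events-per-second figure it prints. A sketch, assuming the binary path shown in the log:

    # run the reactor loop for 5 seconds instead of the test's 1 second
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 5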
00:04:59.037 [2024-05-15 00:20:24.866348] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751481 ] 00:04:59.037 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.037 [2024-05-15 00:20:24.941999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.037 [2024-05-15 00:20:25.056806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.412 test_start 00:05:00.412 test_end 00:05:00.412 Performance: 352970 events per second 00:05:00.412 00:05:00.412 real 0m1.324s 00:05:00.412 user 0m1.227s 00:05:00.412 sys 0m0.092s 00:05:00.412 00:20:26 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:00.412 00:20:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.412 ************************************ 00:05:00.412 END TEST event_reactor_perf 00:05:00.412 ************************************ 00:05:00.412 00:20:26 event -- event/event.sh@49 -- # uname -s 00:05:00.412 00:20:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:00.412 00:20:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.412 00:20:26 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:00.412 00:20:26 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:00.412 00:20:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.412 ************************************ 00:05:00.412 START TEST event_scheduler 00:05:00.412 ************************************ 00:05:00.412 00:20:26 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.412 * Looking for test storage... 00:05:00.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:00.412 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.413 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=751666 00:05:00.413 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.413 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.413 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 751666 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 751666 ']' 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
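Because the scheduler app is started with --wait-for-rpc, framework initialization pauses until RPCs arrive; as the next lines show, the test switches to the dynamic scheduler and only then releases init. The equivalent manual sequence against the default /var/tmp/spdk.sock socket would be roughly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" framework_set_scheduler dynamic   # the test issues this before init completes
    "$RPC" framework_start_init              # let the paused framework finish starting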
00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.413 [2024-05-15 00:20:26.319783] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:00.413 [2024-05-15 00:20:26.319872] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751666 ] 00:05:00.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.413 [2024-05-15 00:20:26.392411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.413 [2024-05-15 00:20:26.508147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.413 [2024-05-15 00:20:26.508205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.413 [2024-05-15 00:20:26.508280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.413 [2024-05-15 00:20:26.508284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:05:00.413 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.413 POWER: Env isn't set yet! 00:05:00.413 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:00.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:00.413 POWER: Cannot get available frequencies of lcore 0 00:05:00.413 POWER: Attempting to initialise PSTAT power management... 
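The POWER messages above mean the DPDK power library could not read the acpi-cpufreq frequency list on this host and fell back to trying the intel_pstate (PSTAT) path. One way to see what the host actually exposes, assuming cpufreq is enabled and a standard sysfs layout:

    # inspect the cpufreq driver and governors for core 0
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor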
00:05:00.413 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:00.413 POWER: Initialized successfully for lcore 0 power management 00:05:00.413 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:00.413 POWER: Initialized successfully for lcore 1 power management 00:05:00.413 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:00.413 POWER: Initialized successfully for lcore 2 power management 00:05:00.413 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:00.413 POWER: Initialized successfully for lcore 3 power management 00:05:00.413 [2024-05-15 00:20:26.570144] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:00.413 [2024-05-15 00:20:26.570161] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:00.413 [2024-05-15 00:20:26.570172] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.413 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.413 00:20:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.671 [2024-05-15 00:20:26.672115] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:00.671 00:20:26 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.671 00:20:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:00.671 00:20:26 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:00.671 00:20:26 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:00.671 00:20:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.671 ************************************ 00:05:00.671 START TEST scheduler_create_thread 00:05:00.671 ************************************ 00:05:00.671 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 2 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 3 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 4 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 5 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 6 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 7 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 8 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 9 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 10 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.672 00:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.237 00:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.237 00:20:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.237 00:20:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.237 00:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.237 00:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.171 00:20:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.171 00:20:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.171 00:20:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.171 00:20:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.104 00:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:03.104 00:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:03.104 00:20:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:03.104 00:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:03.104 00:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.037 00:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:04.037 00:05:04.037 real 0m3.229s 00:05:04.037 user 0m0.011s 00:05:04.037 sys 0m0.005s 00:05:04.037 00:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:04.037 00:20:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.037 ************************************ 00:05:04.037 END TEST scheduler_create_thread 00:05:04.037 ************************************ 00:05:04.037 00:20:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:04.037 00:20:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 751666 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 751666 ']' 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 751666 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 
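scheduler_create_thread above builds its whole workload through the scheduler_plugin RPCs: pinned active and idle threads on each core, plus unpinned threads whose activity is changed or that are deleted mid-run. The shape of those calls, copied from the pattern in the log and assuming scheduler_plugin is importable (the test arranges PYTHONPATH for this):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # a busy thread pinned to core 0: cpumask 0x1, 100% active
    "$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # an unpinned thread at 30% activity
    "$RPC" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # adjust a thread's activity, then delete another; ids come back from the create calls
    "$RPC" --plugin scheduler_plugin scheduler_thread_set_active 11 50
    "$RPC" --plugin scheduler_plugin scheduler_thread_delete 12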
00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 751666 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 751666' 00:05:04.037 killing process with pid 751666 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 751666 00:05:04.037 00:20:29 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 751666 00:05:04.295 [2024-05-15 00:20:30.314295] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:04.554 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:04.554 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:04.554 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:04.554 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:04.554 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:04.554 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:04.554 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:04.554 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:04.554 00:05:04.554 real 0m4.419s 00:05:04.554 user 0m7.613s 00:05:04.554 sys 0m0.342s 00:05:04.554 00:20:30 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:04.554 00:20:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.554 ************************************ 00:05:04.554 END TEST event_scheduler 00:05:04.554 ************************************ 00:05:04.554 00:20:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:04.554 00:20:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:04.554 00:20:30 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:04.554 00:20:30 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:04.554 00:20:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.554 ************************************ 00:05:04.554 START TEST app_repeat 00:05:04.554 ************************************ 00:05:04.554 00:20:30 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=752249 00:05:04.554 00:20:30 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 752249' 00:05:04.554 Process app_repeat pid: 752249 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:04.554 spdk_app_start Round 0 00:05:04.554 00:20:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 752249 /var/tmp/spdk-nbd.sock 00:05:04.554 00:20:30 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 752249 ']' 00:05:04.554 00:20:30 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.554 00:20:30 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:04.554 00:20:30 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.554 00:20:30 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:04.554 00:20:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.813 [2024-05-15 00:20:30.733975] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:04.813 [2024-05-15 00:20:30.734053] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752249 ] 00:05:04.813 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.813 [2024-05-15 00:20:30.808806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.813 [2024-05-15 00:20:30.924957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.813 [2024-05-15 00:20:30.924963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.070 00:20:31 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:05.070 00:20:31 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:05.070 00:20:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.328 Malloc0 00:05:05.328 00:20:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.586 Malloc1 00:05:05.586 00:20:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.586 00:20:31 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.586 00:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.587 00:20:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.844 /dev/nbd0 00:05:05.844 00:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.844 00:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.844 1+0 records in 00:05:05.844 1+0 records out 00:05:05.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160349 s, 25.5 MB/s 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:05.844 00:20:31 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:05.844 00:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.844 00:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.844 00:20:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.102 /dev/nbd1 00:05:06.102 00:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.102 00:20:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:06.102 00:20:32 event.app_repeat -- 
common/autotest_common.sh@866 -- # local i 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:06.102 00:20:32 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.102 1+0 records in 00:05:06.102 1+0 records out 00:05:06.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158852 s, 25.8 MB/s 00:05:06.103 00:20:32 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.103 00:20:32 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:06.103 00:20:32 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.103 00:20:32 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:06.103 00:20:32 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:06.103 00:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.103 00:20:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.103 00:20:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.103 00:20:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.103 00:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.361 { 00:05:06.361 "nbd_device": "/dev/nbd0", 00:05:06.361 "bdev_name": "Malloc0" 00:05:06.361 }, 00:05:06.361 { 00:05:06.361 "nbd_device": "/dev/nbd1", 00:05:06.361 "bdev_name": "Malloc1" 00:05:06.361 } 00:05:06.361 ]' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.361 { 00:05:06.361 "nbd_device": "/dev/nbd0", 00:05:06.361 "bdev_name": "Malloc0" 00:05:06.361 }, 00:05:06.361 { 00:05:06.361 "nbd_device": "/dev/nbd1", 00:05:06.361 "bdev_name": "Malloc1" 00:05:06.361 } 00:05:06.361 ]' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.361 /dev/nbd1' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.361 /dev/nbd1' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.361 00:20:32 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.361 256+0 records in 00:05:06.361 256+0 records out 00:05:06.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530569 s, 198 MB/s 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.361 256+0 records in 00:05:06.361 256+0 records out 00:05:06.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210953 s, 49.7 MB/s 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.361 256+0 records in 00:05:06.361 256+0 records out 00:05:06.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250435 s, 41.9 MB/s 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.361 00:20:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.619 00:20:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.620 00:20:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.878 00:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.135 00:20:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.135 00:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.135 00:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.135 00:20:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.136 00:20:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.136 00:20:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.393 00:20:33 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:07.957 [2024-05-15 00:20:33.825771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.957 [2024-05-15 00:20:33.943795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.957 [2024-05-15 00:20:33.943795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.957 [2024-05-15 00:20:34.000505] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.957 [2024-05-15 00:20:34.000576] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.480 00:20:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.480 00:20:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:10.480 spdk_app_start Round 1 00:05:10.480 00:20:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 752249 /var/tmp/spdk-nbd.sock 00:05:10.480 00:20:36 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 752249 ']' 00:05:10.480 00:20:36 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.480 00:20:36 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:10.480 00:20:36 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.480 00:20:36 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:10.480 00:20:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.737 00:20:36 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:10.737 00:20:36 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:10.737 00:20:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.995 Malloc0 00:05:10.995 00:20:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.253 Malloc1 00:05:11.253 00:20:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.253 00:20:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.510 /dev/nbd0 00:05:11.510 00:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.510 00:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.510 1+0 records in 00:05:11.510 1+0 records out 00:05:11.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166685 s, 24.6 MB/s 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:11.510 00:20:37 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:11.510 00:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.511 00:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.511 00:20:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.768 /dev/nbd1 00:05:11.768 00:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.768 00:20:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 
00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.768 1+0 records in 00:05:11.768 1+0 records out 00:05:11.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195401 s, 21.0 MB/s 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:11.768 00:20:37 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:11.768 00:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.768 00:20:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.768 00:20:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.768 00:20:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.768 00:20:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.026 { 00:05:12.026 "nbd_device": "/dev/nbd0", 00:05:12.026 "bdev_name": "Malloc0" 00:05:12.026 }, 00:05:12.026 { 00:05:12.026 "nbd_device": "/dev/nbd1", 00:05:12.026 "bdev_name": "Malloc1" 00:05:12.026 } 00:05:12.026 ]' 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.026 { 00:05:12.026 "nbd_device": "/dev/nbd0", 00:05:12.026 "bdev_name": "Malloc0" 00:05:12.026 }, 00:05:12.026 { 00:05:12.026 "nbd_device": "/dev/nbd1", 00:05:12.026 "bdev_name": "Malloc1" 00:05:12.026 } 00:05:12.026 ]' 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.026 /dev/nbd1' 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.026 /dev/nbd1' 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.026 256+0 records in 00:05:12.026 256+0 records out 00:05:12.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476388 s, 220 MB/s 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.026 256+0 records in 00:05:12.026 256+0 records out 00:05:12.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215119 s, 48.7 MB/s 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.026 256+0 records in 00:05:12.026 256+0 records out 00:05:12.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229868 s, 45.6 MB/s 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.026 00:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.283 00:20:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.284 
00:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.284 00:20:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.542 00:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.800 00:20:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.800 00:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.800 00:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.065 00:20:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.065 00:20:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.388 00:20:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.388 [2024-05-15 00:20:39.536728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.646 [2024-05-15 00:20:39.651661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.646 [2024-05-15 00:20:39.651665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.646 [2024-05-15 00:20:39.714468] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
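Each app_repeat round traced above follows the same cycle: attach Malloc0 and Malloc1 to /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each nbd device, read it back with a byte-for-byte compare, detach, then send spdk_kill_instance SIGTERM and sleep 3 seconds before the next round starts. The core write/verify step, reduced to a hedged sketch (needs root for the /dev/nbd* writes; nbd_rpc and TMP are illustrative names, the socket path is the one from the trace):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  nbd_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
  TMP=$(mktemp)

  nbd_rpc nbd_start_disk Malloc0 /dev/nbd0                   # expose the malloc bdevs as block devices
  nbd_rpc nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB of random reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct   # write it through each nbd device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$TMP" "$nbd"                               # read back and compare byte-for-byte
  done
  rm "$TMP"

  nbd_rpc nbd_stop_disk /dev/nbd0                              # detach before the app is torn down
  nbd_rpc nbd_stop_disk /dev/nbd1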
00:05:13.646 [2024-05-15 00:20:39.714548] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.173 00:20:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.173 00:20:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:16.173 spdk_app_start Round 2 00:05:16.173 00:20:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 752249 /var/tmp/spdk-nbd.sock 00:05:16.173 00:20:42 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 752249 ']' 00:05:16.173 00:20:42 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.173 00:20:42 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:16.173 00:20:42 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.173 00:20:42 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:16.173 00:20:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.431 00:20:42 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:16.431 00:20:42 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:16.431 00:20:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.688 Malloc0 00:05:16.688 00:20:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.946 Malloc1 00:05:16.946 00:20:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.946 00:20:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.204 /dev/nbd0 00:05:17.204 00:20:43 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.204 00:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.204 1+0 records in 00:05:17.204 1+0 records out 00:05:17.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150281 s, 27.3 MB/s 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:17.204 00:20:43 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:17.204 00:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.204 00:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.204 00:20:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.462 /dev/nbd1 00:05:17.462 00:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.462 00:20:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.462 1+0 records in 00:05:17.462 1+0 records out 00:05:17.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147291 s, 27.8 MB/s 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.462 00:20:43 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:17.463 00:20:43 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:17.463 00:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.463 00:20:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.463 00:20:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.463 00:20:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.463 00:20:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.720 { 00:05:17.720 "nbd_device": "/dev/nbd0", 00:05:17.720 "bdev_name": "Malloc0" 00:05:17.720 }, 00:05:17.720 { 00:05:17.720 "nbd_device": "/dev/nbd1", 00:05:17.720 "bdev_name": "Malloc1" 00:05:17.720 } 00:05:17.720 ]' 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.720 { 00:05:17.720 "nbd_device": "/dev/nbd0", 00:05:17.720 "bdev_name": "Malloc0" 00:05:17.720 }, 00:05:17.720 { 00:05:17.720 "nbd_device": "/dev/nbd1", 00:05:17.720 "bdev_name": "Malloc1" 00:05:17.720 } 00:05:17.720 ]' 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.720 /dev/nbd1' 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.720 /dev/nbd1' 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.720 256+0 records in 00:05:17.720 256+0 records out 00:05:17.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397348 s, 264 MB/s 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.720 256+0 records in 00:05:17.720 256+0 records out 00:05:17.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213839 s, 49.0 MB/s 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.720 00:20:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.978 256+0 records in 00:05:17.978 256+0 records out 00:05:17.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229276 s, 45.7 MB/s 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.978 00:20:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
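The waitfornbd and waitfornbd_exit traces repeated above are simple polling loops over /proc/partitions: the attach side waits until the kernel lists the device and then issues one direct-I/O read to confirm it actually answers, while the exit side inverts the test and waits for the entry to disappear. A rough sketch of the attach-side loop; the 20-try budget matches the counter in the trace, but the sleep interval and the /dev/null target are guesses (the real helper dd's into a scratch file and stats it):

  waitfornbd_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              # device node is visible; a single O_DIRECT read proves it serves I/O
              dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null && return 0
          fi
          sleep 0.1
      done
      return 1
  }

  waitfornbd_sketch nbd0    # e.g. right after nbd_start_disk Malloc0 /dev/nbd0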
00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.236 00:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.493 00:20:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.750 00:20:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.750 00:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.750 00:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.750 00:20:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.750 00:20:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.750 00:20:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.751 00:20:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.751 00:20:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.751 00:20:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.751 00:20:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.751 00:20:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.751 00:20:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.751 00:20:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.009 00:20:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.268 [2024-05-15 00:20:45.243947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.268 [2024-05-15 00:20:45.358597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.268 [2024-05-15 00:20:45.358597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.268 [2024-05-15 00:20:45.421565] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.268 [2024-05-15 00:20:45.421665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
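After each teardown the trace re-queries the target and expects zero attached devices: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts them (the bare `true` in the trace exists because grep -c exits non-zero when the count is 0). A small sketch of that counting step, assuming the same socket path; nbd_get_count_sketch is an illustrative name:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  nbd_get_count_sketch() {
      local names
      names=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device')
      # tolerate an empty list: grep -c still prints 0 but exits with status 1
      echo "$names" | grep -c /dev/nbd || true
  }

  [ "$(nbd_get_count_sketch)" -eq 0 ] && echo "all nbd devices detached"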
00:05:22.550 00:20:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 752249 /var/tmp/spdk-nbd.sock 00:05:22.550 00:20:47 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 752249 ']' 00:05:22.550 00:20:47 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.550 00:20:47 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:22.550 00:20:47 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.550 00:20:47 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:22.550 00:20:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:22.550 00:20:48 event.app_repeat -- event/event.sh@39 -- # killprocess 752249 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 752249 ']' 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 752249 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 752249 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 752249' 00:05:22.550 killing process with pid 752249 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@966 -- # kill 752249 00:05:22.550 00:20:48 event.app_repeat -- common/autotest_common.sh@971 -- # wait 752249 00:05:22.550 spdk_app_start is called in Round 0. 00:05:22.550 Shutdown signal received, stop current app iteration 00:05:22.550 Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 reinitialization... 00:05:22.550 spdk_app_start is called in Round 1. 00:05:22.550 Shutdown signal received, stop current app iteration 00:05:22.550 Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 reinitialization... 00:05:22.550 spdk_app_start is called in Round 2. 00:05:22.550 Shutdown signal received, stop current app iteration 00:05:22.551 Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 reinitialization... 00:05:22.551 spdk_app_start is called in Round 3. 
00:05:22.551 Shutdown signal received, stop current app iteration 00:05:22.551 00:20:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:22.551 00:20:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:22.551 00:05:22.551 real 0m17.778s 00:05:22.551 user 0m38.736s 00:05:22.551 sys 0m3.310s 00:05:22.551 00:20:48 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:22.551 00:20:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.551 ************************************ 00:05:22.551 END TEST app_repeat 00:05:22.551 ************************************ 00:05:22.551 00:20:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:22.551 00:20:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:22.551 00:20:48 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.551 00:20:48 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.551 00:20:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.551 ************************************ 00:05:22.551 START TEST cpu_locks 00:05:22.551 ************************************ 00:05:22.551 00:20:48 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:22.551 * Looking for test storage... 00:05:22.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:22.551 00:20:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:22.551 00:20:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:22.551 00:20:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:22.551 00:20:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:22.551 00:20:48 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.551 00:20:48 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.551 00:20:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.551 ************************************ 00:05:22.551 START TEST default_locks 00:05:22.551 ************************************ 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=754596 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 754596 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 754596 ']' 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
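Teardown throughout this run goes through the killprocess helper traced above (pids 751666 and 752249, and 754596 next): it refuses an empty pid, probes the process with kill -0, inspects its comm name via ps so it never signals a bare sudo wrapper by accident, then kills and waits so the exit status is reaped. A condensed, hedged sketch of that flow; the traced helper handles the sudo case more elaborately, this version simply refuses:

  killprocess_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1                       # refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
      if [ "$(uname)" = Linux ]; then
          # do not signal a bare sudo wrapper; the real helper special-cases this
          [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                      # reap it so the caller sees the exit code
  }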
00:05:22.551 00:20:48 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:22.551 00:20:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.551 [2024-05-15 00:20:48.667557] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:22.551 [2024-05-15 00:20:48.667644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754596 ] 00:05:22.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.810 [2024-05-15 00:20:48.737678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.810 [2024-05-15 00:20:48.849622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.068 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:23.068 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:23.068 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 754596 00:05:23.068 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 754596 00:05:23.068 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.326 lslocks: write error 00:05:23.326 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 754596 00:05:23.326 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 754596 ']' 00:05:23.326 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 754596 00:05:23.326 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:23.326 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:23.326 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 754596 00:05:23.584 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:23.584 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:23.584 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 754596' 00:05:23.584 killing process with pid 754596 00:05:23.584 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 754596 00:05:23.584 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 754596 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 754596 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 754596 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 754596 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 754596 ']' 00:05:23.842 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (754596) - No such process 00:05:23.843 ERROR: process (pid: 754596) is no longer running 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.843 00:05:23.843 real 0m1.328s 00:05:23.843 user 0m1.242s 00:05:23.843 sys 0m0.566s 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:23.843 00:20:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.843 ************************************ 00:05:23.843 END TEST default_locks 00:05:23.843 ************************************ 00:05:23.843 00:20:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:23.843 00:20:49 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:23.843 00:20:49 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:23.843 00:20:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.843 ************************************ 00:05:23.843 START TEST default_locks_via_rpc 00:05:23.843 ************************************ 00:05:23.843 00:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:23.843 00:20:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=754768 00:05:23.843 00:20:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.843 00:20:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 754768 00:05:23.843 00:20:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 754768 ']' 00:05:23.843 00:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.843 00:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:23.843 00:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.101 00:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:24.101 00:20:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.101 [2024-05-15 00:20:50.055884] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:24.101 [2024-05-15 00:20:50.056000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754768 ] 00:05:24.101 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.101 [2024-05-15 00:20:50.132106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.101 [2024-05-15 00:20:50.254835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 754768 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 754768 00:05:25.034 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 754768 00:05:25.292 00:20:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 754768 ']' 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 754768 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 754768 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 754768' 00:05:25.292 killing process with pid 754768 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 754768 00:05:25.292 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 754768 00:05:25.857 00:05:25.857 real 0m1.835s 00:05:25.857 user 0m1.960s 00:05:25.857 sys 0m0.598s 00:05:25.857 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:25.857 00:20:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.857 ************************************ 00:05:25.857 END TEST default_locks_via_rpc 00:05:25.857 ************************************ 00:05:25.857 00:20:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:25.857 00:20:51 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:25.857 00:20:51 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.857 00:20:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.857 ************************************ 00:05:25.857 START TEST non_locking_app_on_locked_coremask 00:05:25.857 ************************************ 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=755047 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 755047 /var/tmp/spdk.sock 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 755047 ']' 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
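The default_locks_via_rpc case that finishes above exercises the same lock from the RPC side: the target starts with locks held, framework_disable_cpumask_locks releases them, the no_locks helper confirms no lock files remain (the later check_remaining_locks output shows they live under /var/tmp/spdk_cpu_lock_*), and framework_enable_cpumask_locks takes them back before teardown. A hedged sketch of that round trip using the rpc_cmd wrapper seen in the trace (without -s it appears to talk to the default /var/tmp/spdk.sock socket):

    # toggle the CPU core locks of a running target over JSON-RPC
    rpc_cmd framework_disable_cpumask_locks             # release the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core locks held"
    rpc_cmd framework_enable_cpumask_locks              # re-acquire them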
00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:25.857 00:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.857 [2024-05-15 00:20:51.943832] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:25.857 [2024-05-15 00:20:51.943937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755047 ] 00:05:25.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.857 [2024-05-15 00:20:52.015448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.116 [2024-05-15 00:20:52.128940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=755066 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 755066 /var/tmp/spdk2.sock 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 755066 ']' 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:26.374 00:20:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.374 [2024-05-15 00:20:52.438622] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:26.374 [2024-05-15 00:20:52.438695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755066 ] 00:05:26.374 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.632 [2024-05-15 00:20:52.551462] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.632 [2024-05-15 00:20:52.551497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.632 [2024-05-15 00:20:52.789358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.565 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:27.565 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:27.565 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 755047 00:05:27.565 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 755047 00:05:27.565 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.822 lslocks: write error 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 755047 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 755047 ']' 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 755047 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 755047 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 755047' 00:05:27.822 killing process with pid 755047 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 755047 00:05:27.822 00:20:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 755047 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 755066 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 755066 ']' 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 755066 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 755066 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 755066' 00:05:28.783 killing 
process with pid 755066 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 755066 00:05:28.783 00:20:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 755066 00:05:29.042 00:05:29.042 real 0m3.313s 00:05:29.042 user 0m3.402s 00:05:29.042 sys 0m1.096s 00:05:29.042 00:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:29.042 00:20:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.042 ************************************ 00:05:29.042 END TEST non_locking_app_on_locked_coremask 00:05:29.042 ************************************ 00:05:29.299 00:20:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:29.299 00:20:55 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:29.299 00:20:55 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:29.299 00:20:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.299 ************************************ 00:05:29.299 START TEST locking_app_on_unlocked_coremask 00:05:29.299 ************************************ 00:05:29.299 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:29.299 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=755490 00:05:29.299 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:29.299 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 755490 /var/tmp/spdk.sock 00:05:29.299 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 755490 ']' 00:05:29.300 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.300 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:29.300 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.300 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:29.300 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.300 [2024-05-15 00:20:55.303324] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:29.300 [2024-05-15 00:20:55.303402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755490 ] 00:05:29.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.300 [2024-05-15 00:20:55.371325] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
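The two coremask tests around this point exercise the --disable-cpumask-locks opt-out: non_locking_app_on_locked_coremask, which just finished, showed that a second target can share core 0 with a lock-holding one as long as the second opts out of locking, and locking_app_on_unlocked_coremask, starting above, inverts the roles so the opted-out instance comes first. A sketch of the two-instance launch reconstructed from the command lines in this trace ($SPDK_BIN is a placeholder for the build/bin path used by this job):

    # first target takes the core 0 lock, second skips locking and uses its own RPC socket
    "$SPDK_BIN"/spdk_tgt -m 0x1 &
    "$SPDK_BIN"/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

Only the opted-out instance prints the 'CPU core locks deactivated' notice seen here.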
00:05:29.300 [2024-05-15 00:20:55.371363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.558 [2024-05-15 00:20:55.480014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=755503 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 755503 /var/tmp/spdk2.sock 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 755503 ']' 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:29.817 00:20:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.817 [2024-05-15 00:20:55.783547] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:05:29.817 [2024-05-15 00:20:55.783631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755503 ] 00:05:29.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.817 [2024-05-15 00:20:55.895756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.076 [2024-05-15 00:20:56.134734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.640 00:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:30.640 00:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:30.640 00:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 755503 00:05:30.640 00:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 755503 00:05:30.640 00:20:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.204 lslocks: write error 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 755490 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 755490 ']' 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 755490 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 755490 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 755490' 00:05:31.204 killing process with pid 755490 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 755490 00:05:31.204 00:20:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 755490 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 755503 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 755503 ']' 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 755503 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 755503 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:32.136 
00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 755503' 00:05:32.136 killing process with pid 755503 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 755503 00:05:32.136 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 755503 00:05:32.703 00:05:32.703 real 0m3.338s 00:05:32.703 user 0m3.485s 00:05:32.703 sys 0m1.047s 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.703 ************************************ 00:05:32.703 END TEST locking_app_on_unlocked_coremask 00:05:32.703 ************************************ 00:05:32.703 00:20:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:32.703 00:20:58 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:32.703 00:20:58 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:32.703 00:20:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.703 ************************************ 00:05:32.703 START TEST locking_app_on_locked_coremask 00:05:32.703 ************************************ 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=755924 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 755924 /var/tmp/spdk.sock 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 755924 ']' 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:32.703 00:20:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.703 [2024-05-15 00:20:58.700280] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
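A pattern worth noting in the repeated killprocess blocks above: before sending a signal, the helper resolves what the PID actually is with ps and checks the command name, which is why every teardown prints process_name=reactor_0 (an SPDK reactor) and a comparison against sudo. A rough sketch of that guard, reconstructed from the trace rather than from the helper's source:

    # confirm the PID still belongs to an SPDK reactor before killing it
    pid=755503                                    # example PID from the run above
    name=$(ps --no-headers -o comm= "$pid")       # expected: reactor_0
    if [ "$name" != sudo ]; then
        kill "$pid"                               # plain SIGTERM for a directly-owned target
    fi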
00:05:32.703 [2024-05-15 00:20:58.700361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755924 ] 00:05:32.703 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.703 [2024-05-15 00:20:58.773363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.961 [2024-05-15 00:20:58.892896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=755970 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 755970 /var/tmp/spdk2.sock 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 755970 /var/tmp/spdk2.sock 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 755970 /var/tmp/spdk2.sock 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 755970 ']' 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:33.218 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.218 [2024-05-15 00:20:59.204616] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:05:33.218 [2024-05-15 00:20:59.204693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755970 ] 00:05:33.218 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.218 [2024-05-15 00:20:59.318415] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 755924 has claimed it. 00:05:33.218 [2024-05-15 00:20:59.318473] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (755970) - No such process 00:05:33.784 ERROR: process (pid: 755970) is no longer running 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 755924 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 755924 00:05:33.784 00:20:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.042 lslocks: write error 00:05:34.042 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 755924 00:05:34.042 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 755924 ']' 00:05:34.042 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 755924 00:05:34.042 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:34.042 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:34.042 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 755924 00:05:34.300 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:34.300 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:34.300 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 755924' 00:05:34.300 killing process with pid 755924 00:05:34.300 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 755924 00:05:34.300 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 755924 00:05:34.557 00:05:34.557 real 0m1.995s 00:05:34.557 user 0m2.133s 00:05:34.557 sys 0m0.665s 00:05:34.557 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.557 00:21:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.557 ************************************ 00:05:34.557 END TEST locking_app_on_locked_coremask 00:05:34.557 ************************************ 00:05:34.557 00:21:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:34.557 00:21:00 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:34.557 00:21:00 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:34.557 00:21:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.557 ************************************ 00:05:34.557 START TEST locking_overlapped_coremask 00:05:34.557 ************************************ 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=756270 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 756270 /var/tmp/spdk.sock 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 756270 ']' 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:34.557 00:21:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.815 [2024-05-15 00:21:00.741542] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
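The locking_app_on_locked_coremask result wrapped up above is the negative case: with core 0 already locked by pid 755924, the second fully-locking target logs 'Cannot create lock on core 0, probably process 755924 has claimed it' and exits, and the harness asserts that outcome by inverting the listen check. A sketch of that expected-failure pattern using the NOT and waitforlisten helpers visible in the trace ($SPDK_BIN again stands in for the build path):

    # the second locking target must never come up on its RPC socket
    "$SPDK_BIN"/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock    # NOT flips the status: passing here means it failed to start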
00:05:34.815 [2024-05-15 00:21:00.741616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756270 ] 00:05:34.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.815 [2024-05-15 00:21:00.818355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.815 [2024-05-15 00:21:00.938963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.815 [2024-05-15 00:21:00.939008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.815 [2024-05-15 00:21:00.939012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.073 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=756320 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 756320 /var/tmp/spdk2.sock 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 756320 /var/tmp/spdk2.sock 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 756320 /var/tmp/spdk2.sock 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 756320 ']' 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:35.074 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.332 [2024-05-15 00:21:01.242939] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:05:35.332 [2024-05-15 00:21:01.243044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756320 ] 00:05:35.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.332 [2024-05-15 00:21:01.348889] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 756270 has claimed it. 00:05:35.332 [2024-05-15 00:21:01.348990] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:35.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (756320) - No such process 00:05:35.899 ERROR: process (pid: 756320) is no longer running 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 756270 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 756270 ']' 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 756270 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 756270 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 756270' 00:05:35.899 killing process with pid 756270 00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 756270 
00:05:35.899 00:21:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 756270 00:05:36.465 00:05:36.465 real 0m1.741s 00:05:36.465 user 0m4.594s 00:05:36.465 sys 0m0.465s 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.465 ************************************ 00:05:36.465 END TEST locking_overlapped_coremask 00:05:36.465 ************************************ 00:05:36.465 00:21:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:36.465 00:21:02 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:36.465 00:21:02 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:36.465 00:21:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.465 ************************************ 00:05:36.465 START TEST locking_overlapped_coremask_via_rpc 00:05:36.465 ************************************ 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=756536 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 756536 /var/tmp/spdk.sock 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 756536 ']' 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:36.465 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.465 [2024-05-15 00:21:02.539160] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:36.465 [2024-05-15 00:21:02.539253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756536 ] 00:05:36.465 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.465 [2024-05-15 00:21:02.615181] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
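The overlapped-coremask case above shows the lock granularity: the first target's mask 0x7 pins cores 0 through 2, the second is started with 0x1c (cores 2 through 4) and dies on the shared core 2, after which check_remaining_locks confirms the survivor still owns exactly three lock files. A quick way to see the same state from the shell, using the lock-file names printed by that check:

    # after the failed overlap, cores 0-2 should still be locked by the 0x7 target
    ls /var/tmp/spdk_cpu_lock_*        # expected: ..._000 ..._001 ..._002 and nothing else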
00:05:36.465 [2024-05-15 00:21:02.615218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.723 [2024-05-15 00:21:02.740957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.723 [2024-05-15 00:21:02.741010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.723 [2024-05-15 00:21:02.741016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=756638 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 756638 /var/tmp/spdk2.sock 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 756638 ']' 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:36.981 00:21:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.981 [2024-05-15 00:21:03.046855] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:36.981 [2024-05-15 00:21:03.046957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756638 ] 00:05:36.981 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.239 [2024-05-15 00:21:03.150661] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.239 [2024-05-15 00:21:03.150695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.239 [2024-05-15 00:21:03.378238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.239 [2024-05-15 00:21:03.378303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:37.239 [2024-05-15 00:21:03.378305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.170 00:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:38.170 00:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:38.170 00:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.170 00:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:38.170 00:21:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.170 [2024-05-15 00:21:04.014025] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 756536 has claimed it. 
00:05:38.170 request: 00:05:38.170 { 00:05:38.170 "method": "framework_enable_cpumask_locks", 00:05:38.170 "req_id": 1 00:05:38.170 } 00:05:38.170 Got JSON-RPC error response 00:05:38.170 response: 00:05:38.170 { 00:05:38.170 "code": -32603, 00:05:38.170 "message": "Failed to claim CPU core: 2" 00:05:38.170 } 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 756536 /var/tmp/spdk.sock 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 756536 ']' 00:05:38.170 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 756638 /var/tmp/spdk2.sock 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 756638 ']' 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
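For context on the -32603 response above: the first spdk_tgt (pid 756536) already holds the /var/tmp/spdk_cpu_lock_* files for cores 0-2, while the second target was started with -m 0x1c (cores 2-4) and --disable-cpumask-locks, so the two overlap on core 2. Re-enabling locks on the second target over its RPC socket is therefore expected to fail on the contested core. A minimal sketch of reproducing the same error by hand, assuming SPDK's scripts/rpc.py from this checkout and both targets still listening on the sockets used above:
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # returns -32603 "Failed to claim CPU core: 2" while the first target owns /var/tmp/spdk_cpu_lock_002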
00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:38.171 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.428 00:05:38.428 real 0m2.037s 00:05:38.428 user 0m1.050s 00:05:38.428 sys 0m0.188s 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:38.428 00:21:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.428 ************************************ 00:05:38.428 END TEST locking_overlapped_coremask_via_rpc 00:05:38.428 ************************************ 00:05:38.428 00:21:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.428 00:21:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 756536 ]] 00:05:38.428 00:21:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 756536 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 756536 ']' 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 756536 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 756536 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 756536' 00:05:38.428 killing process with pid 756536 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 756536 00:05:38.428 00:21:04 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 756536 00:05:38.994 00:21:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 756638 ]] 00:05:38.994 00:21:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 756638 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 756638 ']' 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 756638 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 
00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 756638 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 756638' 00:05:38.994 killing process with pid 756638 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 756638 00:05:38.994 00:21:05 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 756638 00:05:39.560 00:21:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.560 00:21:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:39.560 00:21:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 756536 ]] 00:05:39.560 00:21:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 756536 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 756536 ']' 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 756536 00:05:39.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (756536) - No such process 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 756536 is not found' 00:05:39.560 Process with pid 756536 is not found 00:05:39.560 00:21:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 756638 ]] 00:05:39.560 00:21:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 756638 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 756638 ']' 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 756638 00:05:39.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (756638) - No such process 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 756638 is not found' 00:05:39.560 Process with pid 756638 is not found 00:05:39.560 00:21:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.560 00:05:39.560 real 0m16.952s 00:05:39.560 user 0m28.813s 00:05:39.560 sys 0m5.535s 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:39.560 00:21:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.560 ************************************ 00:05:39.560 END TEST cpu_locks 00:05:39.560 ************************************ 00:05:39.560 00:05:39.560 real 0m43.503s 00:05:39.560 user 1m22.002s 00:05:39.560 sys 0m9.691s 00:05:39.560 00:21:05 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:39.560 00:21:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.560 ************************************ 00:05:39.560 END TEST event 00:05:39.560 ************************************ 00:05:39.560 00:21:05 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:39.560 00:21:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:39.560 00:21:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:39.560 00:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.560 ************************************ 00:05:39.560 START TEST thread 00:05:39.560 ************************************ 00:05:39.560 00:21:05 thread -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:39.560 * Looking for test storage... 00:05:39.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:39.560 00:21:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:39.560 00:21:05 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:39.560 00:21:05 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:39.560 00:21:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.560 ************************************ 00:05:39.560 START TEST thread_poller_perf 00:05:39.560 ************************************ 00:05:39.560 00:21:05 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:39.561 [2024-05-15 00:21:05.661039] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:39.561 [2024-05-15 00:21:05.661105] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757009 ] 00:05:39.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.819 [2024-05-15 00:21:05.733842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.819 [2024-05-15 00:21:05.846639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.819 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:41.192 ====================================== 00:05:41.192 busy:2714906038 (cyc) 00:05:41.192 total_run_count: 294000 00:05:41.192 tsc_hz: 2700000000 (cyc) 00:05:41.192 ====================================== 00:05:41.192 poller_cost: 9234 (cyc), 3420 (nsec) 00:05:41.192 00:05:41.192 real 0m1.323s 00:05:41.192 user 0m1.224s 00:05:41.192 sys 0m0.092s 00:05:41.192 00:21:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.192 00:21:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.192 ************************************ 00:05:41.192 END TEST thread_poller_perf 00:05:41.192 ************************************ 00:05:41.192 00:21:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.192 00:21:06 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:41.192 00:21:06 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.192 00:21:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.192 ************************************ 00:05:41.192 START TEST thread_poller_perf 00:05:41.192 ************************************ 00:05:41.192 00:21:07 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.192 [2024-05-15 00:21:07.032050] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:05:41.192 [2024-05-15 00:21:07.032108] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757179 ] 00:05:41.192 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.192 [2024-05-15 00:21:07.105581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.192 [2024-05-15 00:21:07.225670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.192 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:42.605 ====================================== 00:05:42.605 busy:2702944265 (cyc) 00:05:42.605 total_run_count: 3862000 00:05:42.605 tsc_hz: 2700000000 (cyc) 00:05:42.605 ====================================== 00:05:42.605 poller_cost: 699 (cyc), 258 (nsec) 00:05:42.605 00:05:42.605 real 0m1.329s 00:05:42.605 user 0m1.239s 00:05:42.605 sys 0m0.082s 00:05:42.605 00:21:08 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:42.605 00:21:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.605 ************************************ 00:05:42.605 END TEST thread_poller_perf 00:05:42.605 ************************************ 00:05:42.605 00:21:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:42.605 00:05:42.605 real 0m2.802s 00:05:42.605 user 0m2.519s 00:05:42.605 sys 0m0.276s 00:05:42.605 00:21:08 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:42.605 00:21:08 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.605 ************************************ 00:05:42.605 END TEST thread 00:05:42.605 ************************************ 00:05:42.605 00:21:08 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:42.605 00:21:08 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:42.605 00:21:08 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:42.605 00:21:08 -- common/autotest_common.sh@10 -- # set +x 00:05:42.605 ************************************ 00:05:42.605 START TEST accel 00:05:42.605 ************************************ 00:05:42.605 00:21:08 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:42.605 * Looking for test storage... 
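The poller_cost figures in the two summaries above follow directly from the printed counters: the cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure divides that by the TSC rate. For the 1-microsecond-period run, 2714906038 / 294000 ~ 9234 cycles and 9234 / 2.7 ~ 3420 ns at tsc_hz 2700000000; for the 0-period run, 2702944265 / 3862000 ~ 699 cycles ~ 258 ns, i.e. the poller callback itself costs well under a microsecond once the timer period is removed.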
00:05:42.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:42.605 00:21:08 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:42.605 00:21:08 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:42.605 00:21:08 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:42.605 00:21:08 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=757905 00:05:42.605 00:21:08 accel -- accel/accel.sh@63 -- # waitforlisten 757905 00:05:42.605 00:21:08 accel -- common/autotest_common.sh@828 -- # '[' -z 757905 ']' 00:05:42.605 00:21:08 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:42.605 00:21:08 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.605 00:21:08 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:42.605 00:21:08 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:42.605 00:21:08 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.605 00:21:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.605 00:21:08 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:42.605 00:21:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.605 00:21:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.605 00:21:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.605 00:21:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.605 00:21:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.605 00:21:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:42.605 00:21:08 accel -- accel/accel.sh@41 -- # jq -r . 00:05:42.605 [2024-05-15 00:21:08.520733] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:42.605 [2024-05-15 00:21:08.520812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757905 ] 00:05:42.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.606 [2024-05-15 00:21:08.595142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.606 [2024-05-15 00:21:08.713827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.863 00:21:08 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:42.863 00:21:08 accel -- common/autotest_common.sh@861 -- # return 0 00:05:42.863 00:21:08 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:42.863 00:21:08 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:42.863 00:21:08 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:42.863 00:21:08 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:42.863 00:21:08 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:42.863 00:21:08 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:42.863 00:21:08 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:42.863 00:21:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.863 00:21:08 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:42.864 00:21:08 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:42.864 00:21:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.864 00:21:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.864 00:21:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.864 00:21:09 accel -- accel/accel.sh@75 -- # killprocess 757905 00:05:42.864 00:21:09 accel -- common/autotest_common.sh@947 -- # '[' -z 757905 ']' 00:05:42.864 00:21:09 accel -- common/autotest_common.sh@951 -- # kill -0 757905 00:05:42.864 00:21:09 accel -- common/autotest_common.sh@952 -- # uname 00:05:42.864 00:21:09 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:42.864 00:21:09 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 757905 00:05:43.122 00:21:09 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:43.122 00:21:09 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:43.122 00:21:09 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 757905' 00:05:43.122 killing process with pid 757905 00:05:43.122 00:21:09 accel -- common/autotest_common.sh@966 -- # kill 757905 00:05:43.122 00:21:09 accel -- common/autotest_common.sh@971 -- # wait 757905 00:05:43.379 00:21:09 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:43.379 00:21:09 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:43.379 00:21:09 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:05:43.379 00:21:09 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:43.379 00:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.379 00:21:09 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:43.379 00:21:09 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:43.379 00:21:09 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:43.379 00:21:09 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:43.379 00:21:09 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:43.379 00:21:09 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:43.379 00:21:09 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:43.379 00:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.637 ************************************ 00:05:43.637 START TEST accel_missing_filename 00:05:43.637 ************************************ 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.637 00:21:09 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:43.637 00:21:09 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:43.637 [2024-05-15 00:21:09.584269] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:43.637 [2024-05-15 00:21:09.584339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758162 ] 00:05:43.637 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.637 [2024-05-15 00:21:09.659389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.637 [2024-05-15 00:21:09.778114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.896 [2024-05-15 00:21:09.840824] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.896 [2024-05-15 00:21:09.926419] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:43.896 A filename is required. 
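The expected-failure cases in this part of accel.sh exercise accel_perf argument validation for the compress workload: -l must name the uncompressed input file, and compress does not accept the -y verify switch (the compress_verify run below fails on exactly that). Restated as plain commands, a sketch assuming the build and test paths the harness uses above, relative to the spdk checkout:
  ./build/examples/accel_perf -t 1 -w compress                        # fails: "A filename is required."
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y   # fails: compression does not support the verify option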
00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:43.896 00:05:43.896 real 0m0.487s 00:05:43.896 user 0m0.357s 00:05:43.896 sys 0m0.164s 00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:43.896 00:21:10 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:43.896 ************************************ 00:05:43.896 END TEST accel_missing_filename 00:05:43.896 ************************************ 00:05:44.153 00:21:10 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.153 00:21:10 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:44.153 00:21:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:44.153 00:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.153 ************************************ 00:05:44.153 START TEST accel_compress_verify 00:05:44.153 ************************************ 00:05:44.153 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.153 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:44.153 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.153 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:44.153 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:44.153 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:44.153 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:44.154 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.154 
00:21:10 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:44.154 00:21:10 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:44.154 [2024-05-15 00:21:10.128552] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:44.154 [2024-05-15 00:21:10.128616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758183 ] 00:05:44.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.154 [2024-05-15 00:21:10.204236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.411 [2024-05-15 00:21:10.325665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.411 [2024-05-15 00:21:10.383354] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.411 [2024-05-15 00:21:10.461024] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:44.672 00:05:44.672 Compression does not support the verify option, aborting. 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:44.672 00:05:44.672 real 0m0.476s 00:05:44.672 user 0m0.352s 00:05:44.672 sys 0m0.157s 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:44.672 00:21:10 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 ************************************ 00:05:44.672 END TEST accel_compress_verify 00:05:44.672 ************************************ 00:05:44.672 00:21:10 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:44.672 00:21:10 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:44.672 00:21:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:44.672 00:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 ************************************ 00:05:44.672 START TEST accel_wrong_workload 00:05:44.672 ************************************ 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:05:44.672 
00:21:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:44.672 00:21:10 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:44.672 Unsupported workload type: foobar 00:05:44.672 [2024-05-15 00:21:10.654766] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:44.672 accel_perf options: 00:05:44.672 [-h help message] 00:05:44.672 [-q queue depth per core] 00:05:44.672 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:44.672 [-T number of threads per core 00:05:44.672 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:44.672 [-t time in seconds] 00:05:44.672 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:44.672 [ dif_verify, , dif_generate, dif_generate_copy 00:05:44.672 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:44.672 [-l for compress/decompress workloads, name of uncompressed input file 00:05:44.672 [-S for crc32c workload, use this seed value (default 0) 00:05:44.672 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:44.672 [-f for fill workload, use this BYTE value (default 255) 00:05:44.672 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:44.672 [-y verify result if this switch is on] 00:05:44.672 [-a tasks to allocate per core (default: same value as -q)] 00:05:44.672 Can be used to spread operations across a wider range of memory. 
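The usage text above is printed because foobar is not one of the workload types accepted by -w; the harness expects this failure (the test is wrapped in NOT). A sketch of the valid form, as used by the accel_crc32c test further down:
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # crc32c is a supported workload; -S sets the seed and -y verifies the results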
00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:44.672 00:05:44.672 real 0m0.023s 00:05:44.672 user 0m0.014s 00:05:44.672 sys 0m0.010s 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:44.672 00:21:10 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 ************************************ 00:05:44.672 END TEST accel_wrong_workload 00:05:44.672 ************************************ 00:05:44.672 Error: writing output failed: Broken pipe 00:05:44.672 00:21:10 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:44.672 00:21:10 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:44.672 00:21:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:44.672 00:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.673 ************************************ 00:05:44.673 START TEST accel_negative_buffers 00:05:44.673 ************************************ 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:44.673 00:21:10 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:44.673 -x option must be non-negative. 
00:05:44.673 [2024-05-15 00:21:10.723311] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:44.673 accel_perf options: 00:05:44.673 [-h help message] 00:05:44.673 [-q queue depth per core] 00:05:44.673 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:44.673 [-T number of threads per core 00:05:44.673 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:44.673 [-t time in seconds] 00:05:44.673 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:44.673 [ dif_verify, , dif_generate, dif_generate_copy 00:05:44.673 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:44.673 [-l for compress/decompress workloads, name of uncompressed input file 00:05:44.673 [-S for crc32c workload, use this seed value (default 0) 00:05:44.673 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:44.673 [-f for fill workload, use this BYTE value (default 255) 00:05:44.673 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:44.673 [-y verify result if this switch is on] 00:05:44.673 [-a tasks to allocate per core (default: same value as -q)] 00:05:44.673 Can be used to spread operations across a wider range of memory. 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:44.673 00:05:44.673 real 0m0.022s 00:05:44.673 user 0m0.012s 00:05:44.673 sys 0m0.010s 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:44.673 00:21:10 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:44.673 ************************************ 00:05:44.673 END TEST accel_negative_buffers 00:05:44.673 ************************************ 00:05:44.673 Error: writing output failed: Broken pipe 00:05:44.673 00:21:10 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:44.673 00:21:10 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:44.673 00:21:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:44.673 00:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.673 ************************************ 00:05:44.673 START TEST accel_crc32c 00:05:44.673 ************************************ 00:05:44.673 00:21:10 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:44.673 00:21:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:44.673 [2024-05-15 00:21:10.796910] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:44.673 [2024-05-15 00:21:10.796997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758375 ] 00:05:44.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.931 [2024-05-15 00:21:10.871767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.931 [2024-05-15 00:21:10.990419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.931 00:21:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.303 00:21:12 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:46.303 00:21:12 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.303 00:05:46.303 real 0m1.489s 00:05:46.303 user 0m1.339s 00:05:46.303 sys 0m0.152s 00:05:46.303 00:21:12 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:46.303 00:21:12 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:46.303 ************************************ 00:05:46.303 END TEST accel_crc32c 00:05:46.303 ************************************ 00:05:46.303 00:21:12 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:46.303 00:21:12 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:46.303 00:21:12 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:46.303 00:21:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.303 ************************************ 00:05:46.303 START TEST accel_crc32c_C2 00:05:46.303 ************************************ 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:46.303 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:46.303 [2024-05-15 00:21:12.336456] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:46.303 [2024-05-15 00:21:12.336517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758532 ] 00:05:46.303 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.303 [2024-05-15 00:21:12.410712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.562 [2024-05-15 00:21:12.530661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.562 00:21:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.938 00:21:13 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.938 00:05:47.938 real 0m1.486s 00:05:47.938 user 0m1.336s 00:05:47.938 sys 0m0.152s 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:47.938 00:21:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:47.938 ************************************ 00:05:47.938 END TEST accel_crc32c_C2 00:05:47.938 ************************************ 00:05:47.938 00:21:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:47.938 00:21:13 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:47.938 00:21:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:47.938 00:21:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.938 ************************************ 00:05:47.938 START TEST accel_copy 00:05:47.938 ************************************ 00:05:47.938 00:21:13 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.938 00:21:13 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:47.938 00:21:13 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:47.938 [2024-05-15 00:21:13.869429] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:47.938 [2024-05-15 00:21:13.869493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758757 ] 00:05:47.938 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.938 [2024-05-15 00:21:13.938569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.938 [2024-05-15 00:21:14.056108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.196 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.197 00:21:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.570 00:21:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
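Note on the copy pass traced here: it runs the software accel module for one second with result verification (-y), and the full accel_perf command line appears a few records up. A rough manual re-run, assuming the same build tree as the workspace path in this log and omitting the fd-62 JSON config (the traced accel_json_cfg array is empty), would be:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build tree path as it appears in this log
  "$SPDK/build/examples/accel_perf" -t 1 -w copy -y        # copy workload, 1 second, verify results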
00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:49.571 00:21:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.571 00:05:49.571 real 0m1.468s 00:05:49.571 user 0m1.313s 00:05:49.571 sys 0m0.155s 00:05:49.571 00:21:15 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:49.571 00:21:15 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:49.571 ************************************ 00:05:49.571 END TEST accel_copy 00:05:49.571 ************************************ 00:05:49.571 00:21:15 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.571 00:21:15 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:05:49.571 00:21:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:49.571 00:21:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.571 ************************************ 00:05:49.571 START TEST accel_fill 00:05:49.571 ************************************ 00:05:49.571 00:21:15 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.571 00:21:15 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:49.571 [2024-05-15 00:21:15.387186] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:49.571 [2024-05-15 00:21:15.387269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758967 ] 00:05:49.571 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.571 [2024-05-15 00:21:15.459857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.571 [2024-05-15 00:21:15.578817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.571 00:21:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:50.945 00:21:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.945 00:05:50.945 real 0m1.485s 00:05:50.945 user 0m1.328s 00:05:50.946 sys 0m0.159s 00:05:50.946 00:21:16 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:50.946 00:21:16 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:50.946 ************************************ 00:05:50.946 END TEST accel_fill 00:05:50.946 ************************************ 00:05:50.946 00:21:16 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:50.946 00:21:16 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:50.946 00:21:16 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:50.946 00:21:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.946 ************************************ 00:05:50.946 START TEST accel_copy_crc32c 00:05:50.946 ************************************ 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:50.946 00:21:16 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
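The build_accel_config records just above show the harness collecting JSON snippets into accel_json_cfg, joining them with IFS=',' and handing the result to accel_perf on /dev/fd/62 through jq. The helper itself is not reproduced in this log, so the following is only a sketch of that pattern with illustrative wiring:

  # sketch only: join module config snippets into one JSON object and feed it to
  # accel_perf on fd 62, mirroring the -c /dev/fd/62 invocation seen in the trace
  accel_json_cfg=()                                    # empty in this run, as the trace shows
  cfg_json=$(IFS=,; echo "{ ${accel_json_cfg[*]} }")   # join any snippets with commas
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w copy_crc32c -y \
      62< <(jq -r . <<< "$cfg_json")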
00:05:50.946 [2024-05-15 00:21:16.921509] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:50.946 [2024-05-15 00:21:16.921573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759120 ] 00:05:50.946 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.946 [2024-05-15 00:21:16.994684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.204 [2024-05-15 00:21:17.113828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.204 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.205 00:21:17 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.205 00:21:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.581 00:05:52.581 real 0m1.486s 00:05:52.581 user 0m1.333s 00:05:52.581 sys 0m0.153s 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:52.581 00:21:18 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:52.581 ************************************ 00:05:52.581 END TEST accel_copy_crc32c 00:05:52.581 ************************************ 00:05:52.581 00:21:18 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:52.581 00:21:18 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:52.581 00:21:18 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:52.581 00:21:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.581 ************************************ 00:05:52.581 START TEST accel_copy_crc32c_C2 00:05:52.581 ************************************ 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:52.581 [2024-05-15 00:21:18.457670] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:52.581 [2024-05-15 00:21:18.457732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759396 ] 00:05:52.581 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.581 [2024-05-15 00:21:18.525685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.581 [2024-05-15 00:21:18.641890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.581 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.582 00:21:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.956 00:05:53.956 real 0m1.471s 00:05:53.956 user 0m1.316s 00:05:53.956 sys 0m0.156s 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.956 00:21:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:53.956 
************************************ 00:05:53.956 END TEST accel_copy_crc32c_C2 00:05:53.956 ************************************ 00:05:53.956 00:21:19 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:53.956 00:21:19 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:53.956 00:21:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.956 00:21:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.956 ************************************ 00:05:53.956 START TEST accel_dualcast 00:05:53.956 ************************************ 00:05:53.956 00:21:19 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:53.956 00:21:19 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:53.956 [2024-05-15 00:21:19.977698] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
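The long runs of val= / case "$var" records in this trace are the harness splitting accel_perf's configuration and summary output on ':' and remembering the module and opcode it reports, which the later accel.sh@27 checks assert against. Roughly, with the matched key names left as placeholders since they are not visible in the trace:

  # parse "key: value" lines from accel_perf, as the IFS=: read loop in the trace does
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  accel_module= accel_opc=
  while IFS=: read -r var val; do
      case "$var" in
          *[Mm]odule*)   accel_module=${val// /} ;;    # placeholder pattern, e.g. software
          *[Ww]orkload*) accel_opc=${val// /}    ;;    # placeholder pattern, e.g. dualcast
      esac
  done < <("$SPDK/build/examples/accel_perf" -t 1 -w dualcast -y)
  # the assertions recorded at accel.sh@27 in the trace:
  [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]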
00:05:53.956 [2024-05-15 00:21:19.977761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759556 ] 00:05:53.956 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.956 [2024-05-15 00:21:20.057191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.215 [2024-05-15 00:21:20.178170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 
00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.215 00:21:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.589 00:21:21 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.589 00:21:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.590 00:21:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.590 00:21:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:55.590 00:21:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.590 00:05:55.590 real 0m1.497s 00:05:55.590 user 0m1.337s 00:05:55.590 sys 0m0.161s 00:05:55.590 00:21:21 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:55.590 00:21:21 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:55.590 ************************************ 00:05:55.590 END TEST accel_dualcast 00:05:55.590 ************************************ 00:05:55.590 00:21:21 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:55.590 00:21:21 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:55.590 00:21:21 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:55.590 00:21:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.590 ************************************ 00:05:55.590 START TEST accel_compare 00:05:55.590 ************************************ 00:05:55.590 00:21:21 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:55.590 00:21:21 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:55.590 [2024-05-15 00:21:21.525617] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
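The block of IFS=: / read -r var val / case "$var" entries that dominates the dualcast trace above is the harness parsing accel_perf's own configuration summary (key: value pairs for the workload type, transfer size, queue depth and module) so it can assert afterwards that the software module really executed the requested opcode. A minimal sketch of that pattern, with the key names and the input file chosen for illustration rather than taken from accel.sh, which reads the tool's output directly:

  # record the fields the test later asserts on from "Key: value" summary lines
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val##* } ;;    # e.g. dualcast
          *"Module"*)        accel_module=${val##* } ;; # e.g. software
      esac
  done < accel_perf_summary.txt                         # file name is illustrative
  [[ -n $accel_module && -n $accel_opc ]]               # the checks traced at accel.sh@27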
00:05:55.590 [2024-05-15 00:21:21.525680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759713 ] 00:05:55.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.590 [2024-05-15 00:21:21.599938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.590 [2024-05-15 00:21:21.719013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.849 00:21:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.249 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:22 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:57.250 00:21:22 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.250 00:05:57.250 real 0m1.480s 00:05:57.250 user 0m1.323s 00:05:57.250 sys 0m0.159s 00:05:57.250 00:21:22 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:57.250 00:21:22 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 ************************************ 00:05:57.250 END TEST accel_compare 00:05:57.250 ************************************ 00:05:57.250 00:21:23 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:57.250 00:21:23 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:57.250 00:21:23 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:57.250 00:21:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.250 ************************************ 00:05:57.250 START TEST accel_xor 00:05:57.250 ************************************ 00:05:57.250 00:21:23 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:57.250 [2024-05-15 00:21:23.054636] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
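Each TEST block above reduces to a single accel_perf run; the command line the harness traces is build/examples/accel_perf -c /dev/fd/62 -t 1 -w <workload> -y, with the JSON accel config fed in on file descriptor 62. A rough way to repeat one pass by hand from the same workspace, treating the dropped -c option as a simplification (the built-in software path is then used) rather than something the harness itself does:

  # one-second software run of the compare workload, with result verification (-y)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w compare -y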
00:05:57.250 [2024-05-15 00:21:23.054697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759989 ] 00:05:57.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.250 [2024-05-15 00:21:23.124553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.250 [2024-05-15 00:21:23.241787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.250 00:21:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.623 
00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.623 00:05:58.623 real 0m1.481s 00:05:58.623 user 0m1.332s 00:05:58.623 sys 0m0.151s 00:05:58.623 00:21:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:58.623 00:21:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:58.623 ************************************ 00:05:58.623 END TEST accel_xor 00:05:58.623 ************************************ 00:05:58.623 00:21:24 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:58.623 00:21:24 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:58.623 00:21:24 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:58.623 00:21:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.623 ************************************ 00:05:58.623 START TEST accel_xor 00:05:58.623 ************************************ 00:05:58.623 00:21:24 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:58.623 00:21:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:58.623 [2024-05-15 00:21:24.587504] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
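The accel_xor pass that just finished ran with two source buffers (val=2 in its trace); the pass starting here repeats the workload with -x 3 on the accel_perf command line, i.e. three sources. The verification requested by -y (recorded as Yes in the trace) amounts to folding XOR across all sources, which plain shell arithmetic can illustrate with arbitrary byte values:

  printf '0x%02x\n' $(( 0xa5 ^ 0x3c ^ 0xff ))   # three-way XOR, prints 0x66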
00:05:58.623 [2024-05-15 00:21:24.587567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760151 ] 00:05:58.623 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.623 [2024-05-15 00:21:24.660448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.623 [2024-05-15 00:21:24.779090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.882 00:21:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.256 
00:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:00.256 00:21:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.256 00:06:00.256 real 0m1.493s 00:06:00.256 user 0m1.335s 00:06:00.256 sys 0m0.160s 00:06:00.256 00:21:26 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:00.256 00:21:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:00.256 ************************************ 00:06:00.256 END TEST accel_xor 00:06:00.256 ************************************ 00:06:00.256 00:21:26 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:00.256 00:21:26 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:00.256 00:21:26 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:00.256 00:21:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.256 ************************************ 00:06:00.256 START TEST accel_dif_verify 00:06:00.256 ************************************ 00:06:00.256 00:21:26 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:06:00.256 00:21:26 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:00.256 00:21:26 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:00.256 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:00.257 [2024-05-15 00:21:26.133802] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:06:00.257 [2024-05-15 00:21:26.133870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760304 ] 00:06:00.257 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.257 [2024-05-15 00:21:26.207704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.257 [2024-05-15 00:21:26.330466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 
00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.257 00:21:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.634 
00:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:01.634 00:21:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.634 00:06:01.634 real 0m1.489s 00:06:01.634 user 0m1.340s 00:06:01.634 sys 0m0.152s 00:06:01.634 00:21:27 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:01.634 00:21:27 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:01.634 ************************************ 00:06:01.634 END TEST accel_dif_verify 00:06:01.634 ************************************ 00:06:01.634 00:21:27 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:01.634 00:21:27 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:01.634 00:21:27 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:01.634 00:21:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.634 ************************************ 00:06:01.634 START TEST accel_dif_generate 00:06:01.634 ************************************ 00:06:01.634 00:21:27 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
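The dif_verify pass above and the dif_generate pass starting here exercise per-block protection information rather than plain buffer operations. Their traces carry '4096 bytes', '512 bytes' and '8 bytes' values, which read naturally as the transfer size, block size and per-block DIF size, although the log itself does not label them; under that reading the sizing works out as below.

  # sizes as read (not labelled) from the trace: 4096-byte transfer, 512-byte blocks, 8-byte DIF each
  echo "$(( 4096 / 512 )) blocks, $(( 4096 / 512 * 8 )) bytes of protection information"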
00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:01.634 00:21:27 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:01.634 [2024-05-15 00:21:27.669315] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:01.634 [2024-05-15 00:21:27.669387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760584 ] 00:06:01.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.634 [2024-05-15 00:21:27.743273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.893 [2024-05-15 00:21:27.867382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.893 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:01.894 00:21:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:03.268 00:21:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.269 00:21:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:03.269 00:21:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.269 00:06:03.269 real 0m1.501s 00:06:03.269 user 0m1.350s 00:06:03.269 sys 0m0.155s 00:06:03.269 
00:21:29 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:03.269 00:21:29 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 ************************************ 00:06:03.269 END TEST accel_dif_generate 00:06:03.269 ************************************ 00:06:03.269 00:21:29 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:03.269 00:21:29 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:03.269 00:21:29 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:03.269 00:21:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 ************************************ 00:06:03.269 START TEST accel_dif_generate_copy 00:06:03.269 ************************************ 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:03.269 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:03.269 [2024-05-15 00:21:29.222266] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
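Every TEST block in this run closes with the shell's real/user/sys timing for the pass, which is the quickest number to compare between builds of this job. A small sketch for pulling those lines back out of a saved copy of the console output (the file name here is hypothetical):

  # list the wall-clock time of each accel pass in a saved log
  grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s' nvmf-tcp-phy-autotest-console.log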
00:06:03.269 [2024-05-15 00:21:29.222332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760739 ] 00:06:03.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.269 [2024-05-15 00:21:29.301333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.269 [2024-05-15 00:21:29.424842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:03.527 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.528 00:21:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.911 00:06:04.911 real 0m1.507s 00:06:04.911 user 0m1.346s 00:06:04.911 sys 0m0.163s 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:04.911 00:21:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:04.911 ************************************ 00:06:04.911 END TEST accel_dif_generate_copy 00:06:04.911 ************************************ 00:06:04.911 00:21:30 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:04.911 00:21:30 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.911 00:21:30 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:04.911 00:21:30 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:04.911 00:21:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.911 ************************************ 00:06:04.911 START TEST accel_comp 00:06:04.911 ************************************ 00:06:04.911 00:21:30 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:04.911 00:21:30 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:04.911 [2024-05-15 00:21:30.782841] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:04.911 [2024-05-15 00:21:30.782907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760898 ] 00:06:04.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.911 [2024-05-15 00:21:30.856060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.911 [2024-05-15 00:21:30.978926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 
00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.911 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.912 00:21:31 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:04.912 00:21:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:06.286 00:21:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.286 00:06:06.286 real 0m1.492s 00:06:06.286 user 0m1.340s 00:06:06.286 sys 0m0.155s 00:06:06.286 00:21:32 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:06.286 00:21:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:06.286 ************************************ 00:06:06.286 END TEST accel_comp 00:06:06.286 ************************************ 00:06:06.286 00:21:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.286 00:21:32 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:06.286 00:21:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:06.286 00:21:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.286 ************************************ 00:06:06.286 START TEST accel_decomp 00:06:06.286 ************************************ 00:06:06.286 00:21:32 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:06.286 00:21:32 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:06.286 [2024-05-15 00:21:32.323077] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:06.287 [2024-05-15 00:21:32.323143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761174 ] 00:06:06.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.287 [2024-05-15 00:21:32.396369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.545 [2024-05-15 00:21:32.520513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:06.545 00:21:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.914 00:21:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.914 00:06:07.914 real 0m1.504s 00:06:07.914 user 0m1.356s 00:06:07.914 sys 0m0.151s 00:06:07.914 00:21:33 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:07.914 00:21:33 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:07.914 ************************************ 00:06:07.914 END TEST accel_decomp 00:06:07.914 ************************************ 00:06:07.914 
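Note on the run above: accel_decomp decompressed the pre-compressed bib test file on the software module and verified the output (the -y flag in its accel_perf command line), finishing in about 1.5 s of wall time. A hedged standalone approximation is sketched below; the flag meanings are inferred from the trace, and running without the harness's generated JSON config is an assumption.
# Hedged sketch: approximate the accel_decomp run above outside the harness.
# Assumed flag meanings: -t 1 run time in seconds, -w decompress workload,
# -l compressed input file, -y verify the decompressed output.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y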
00:21:33 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.914 00:21:33 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:07.914 00:21:33 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:07.914 00:21:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.914 ************************************ 00:06:07.914 START TEST accel_decmop_full 00:06:07.914 ************************************ 00:06:07.914 00:21:33 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:07.914 00:21:33 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:07.914 [2024-05-15 00:21:33.879870] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:06:07.914 [2024-05-15 00:21:33.879945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761333 ] 00:06:07.914 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.914 [2024-05-15 00:21:33.953706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.914 [2024-05-15 00:21:34.076525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.173 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.173 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.173 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.173 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.173 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.173 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:08.174 00:21:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.548 00:21:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.548 00:06:09.548 real 0m1.515s 00:06:09.548 user 0m1.369s 00:06:09.548 sys 0m0.149s 00:06:09.548 00:21:35 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:09.548 00:21:35 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:09.548 ************************************ 00:06:09.548 END TEST accel_decmop_full 00:06:09.548 ************************************ 00:06:09.549 00:21:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:09.549 00:21:35 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:09.549 00:21:35 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:09.549 00:21:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.549 ************************************ 00:06:09.549 START TEST accel_decomp_mcore 00:06:09.549 ************************************ 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:09.549 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:09.549 [2024-05-15 00:21:35.448372] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:09.549 [2024-05-15 00:21:35.448441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761514 ] 00:06:09.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.549 [2024-05-15 00:21:35.524080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.549 [2024-05-15 00:21:35.650868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.549 [2024-05-15 00:21:35.650923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.549 [2024-05-15 00:21:35.650960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.549 [2024-05-15 00:21:35.650965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.807 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:09.808 00:21:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.182 00:06:11.182 real 0m1.503s 00:06:11.182 user 0m4.786s 00:06:11.182 sys 0m0.165s 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:11.182 00:21:36 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:11.182 ************************************ 00:06:11.182 END TEST accel_decomp_mcore 00:06:11.182 ************************************ 00:06:11.182 00:21:36 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.182 00:21:36 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:11.182 00:21:36 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:11.182 00:21:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.182 ************************************ 00:06:11.182 START TEST accel_decomp_full_mcore 00:06:11.182 ************************************ 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:11.182 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.183 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.183 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.183 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:11.183 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.183 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:11.183 00:21:36 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:11.183 [2024-05-15 00:21:37.006862] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:11.183 [2024-05-15 00:21:37.006944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761770 ] 00:06:11.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.183 [2024-05-15 00:21:37.082725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.183 [2024-05-15 00:21:37.208783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.183 [2024-05-15 00:21:37.208838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.183 [2024-05-15 00:21:37.208892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.183 [2024-05-15 00:21:37.208896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.183 00:21:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.557 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.557 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.558 00:06:12.558 real 0m1.525s 00:06:12.558 user 0m4.852s 00:06:12.558 sys 0m0.166s 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:12.558 00:21:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:12.558 ************************************ 00:06:12.558 END TEST accel_decomp_full_mcore 00:06:12.558 ************************************ 00:06:12.558 00:21:38 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:12.558 00:21:38 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:12.558 00:21:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:12.558 00:21:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.558 ************************************ 00:06:12.558 START TEST accel_decomp_mthread 00:06:12.558 ************************************ 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:12.558 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
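For reference, the accel_decomp_full_mcore run that just finished drives the accel_perf example binary, with build_accel_config feeding the accel JSON configuration over /dev/fd/62. A minimal manual reproduction of the same workload, assuming the default software module is sufficient (so the -c option is simply dropped), is the command recorded above without the config pipe:

    # 1-second software decompression of the pre-built bib input across cores 0-3
    # (-m 0xf), with output verification (-y); -o 0 as in the logged invocation
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
        -y -o 0 -m 0xf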
00:06:12.558 [2024-05-15 00:21:38.587743] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:12.558 [2024-05-15 00:21:38.587810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761933 ] 00:06:12.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.558 [2024-05-15 00:21:38.662520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.849 [2024-05-15 00:21:38.783535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.849 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:12.850 00:21:38 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.224 00:06:14.224 real 0m1.510s 00:06:14.224 user 0m1.356s 00:06:14.224 sys 0m0.157s 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.224 00:21:40 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:14.224 ************************************ 00:06:14.224 END TEST accel_decomp_mthread 00:06:14.224 ************************************ 00:06:14.224 00:21:40 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.224 00:21:40 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:14.224 00:21:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.224 00:21:40 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.224 ************************************ 00:06:14.224 START TEST accel_decomp_full_mthread 00:06:14.224 ************************************ 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:14.224 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:14.224 [2024-05-15 00:21:40.149324] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
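The decompression variants in this stretch differ only in the flags accel_test hands to accel_perf, and the value dumps make the differences visible: the 0xf core mask starts reactors on cores 0-3 ("Total cores available: 4"), the -T 2 runs dump val=2 in their config (presumably the thread count for the mthread variants), and the -o 0 ("full") variants report a transfer size of '111250 bytes' where the plain mthread run reports '4096 bytes'. The three run_test invocations, as recorded above (bib path shortened here for readability):

    accel_decomp_full_mcore    accel_test -t 1 -w decompress -l .../test/accel/bib -y -o 0 -m 0xf
    accel_decomp_mthread       accel_test -t 1 -w decompress -l .../test/accel/bib -y -T 2
    accel_decomp_full_mthread  accel_test -t 1 -w decompress -l .../test/accel/bib -y -o 0 -T 2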
00:06:14.224 [2024-05-15 00:21:40.149390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762202 ] 00:06:14.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.224 [2024-05-15 00:21:40.224251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.224 [2024-05-15 00:21:40.349426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.482 00:21:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.856 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.857 00:06:15.857 real 0m1.536s 00:06:15.857 user 0m1.382s 00:06:15.857 sys 0m0.156s 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.857 00:21:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:15.857 ************************************ 00:06:15.857 END TEST accel_decomp_full_mthread 00:06:15.857 
************************************ 00:06:15.857 00:21:41 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:15.857 00:21:41 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:15.857 00:21:41 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:15.857 00:21:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.857 00:21:41 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:15.857 00:21:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.857 00:21:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.857 00:21:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.857 00:21:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.857 00:21:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.857 00:21:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.857 00:21:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:15.857 00:21:41 accel -- accel/accel.sh@41 -- # jq -r . 00:06:15.857 ************************************ 00:06:15.857 START TEST accel_dif_functional_tests 00:06:15.857 ************************************ 00:06:15.857 00:21:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:15.857 [2024-05-15 00:21:41.753776] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:15.857 [2024-05-15 00:21:41.753835] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762371 ] 00:06:15.857 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.857 [2024-05-15 00:21:41.825462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.857 [2024-05-15 00:21:41.955043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.857 [2024-05-15 00:21:41.955094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.857 [2024-05-15 00:21:41.955098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.115 00:06:16.115 00:06:16.115 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.115 http://cunit.sourceforge.net/ 00:06:16.115 00:06:16.115 00:06:16.115 Suite: accel_dif 00:06:16.115 Test: verify: DIF generated, GUARD check ...passed 00:06:16.115 Test: verify: DIF generated, APPTAG check ...passed 00:06:16.115 Test: verify: DIF generated, REFTAG check ...passed 00:06:16.115 Test: verify: DIF not generated, GUARD check ...[2024-05-15 00:21:42.057781] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:16.115 [2024-05-15 00:21:42.057850] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:16.115 passed 00:06:16.115 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 00:21:42.057894] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:16.115 [2024-05-15 00:21:42.057925] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:16.115 passed 00:06:16.115 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 00:21:42.057970] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:16.115 [2024-05-15 
00:21:42.058004] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:16.115 passed 00:06:16.115 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:16.115 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 00:21:42.058076] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:16.115 passed 00:06:16.115 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:16.115 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:16.115 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:16.115 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 00:21:42.058233] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:16.115 passed 00:06:16.115 Test: generate copy: DIF generated, GUARD check ...passed 00:06:16.115 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:16.115 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:16.115 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:16.115 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:16.115 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:16.115 Test: generate copy: iovecs-len validate ...[2024-05-15 00:21:42.058494] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:16.115 passed 00:06:16.115 Test: generate copy: buffer alignment validate ...passed 00:06:16.115 00:06:16.115 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.115 suites 1 1 n/a 0 0 00:06:16.115 tests 20 20 20 0 0 00:06:16.115 asserts 204 204 204 0 n/a 00:06:16.115 00:06:16.115 Elapsed time = 0.003 seconds 00:06:16.374 00:06:16.374 real 0m0.611s 00:06:16.374 user 0m0.914s 00:06:16.374 sys 0m0.198s 00:06:16.374 00:21:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:16.374 00:21:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:16.374 ************************************ 00:06:16.374 END TEST accel_dif_functional_tests 00:06:16.374 ************************************ 00:06:16.374 00:06:16.374 real 0m33.934s 00:06:16.374 user 0m37.052s 00:06:16.374 sys 0m4.897s 00:06:16.374 00:21:42 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:16.374 00:21:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.374 ************************************ 00:06:16.374 END TEST accel 00:06:16.374 ************************************ 00:06:16.374 00:21:42 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:16.374 00:21:42 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:16.374 00:21:42 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:16.374 00:21:42 -- common/autotest_common.sh@10 -- # set +x 00:06:16.374 ************************************ 00:06:16.374 START TEST accel_rpc 00:06:16.374 ************************************ 00:06:16.374 00:21:42 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:16.374 * Looking for test storage... 
00:06:16.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:16.374 00:21:42 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.374 00:21:42 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=762550 00:06:16.374 00:21:42 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:16.374 00:21:42 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 762550 00:06:16.374 00:21:42 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 762550 ']' 00:06:16.374 00:21:42 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.374 00:21:42 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:16.374 00:21:42 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.374 00:21:42 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:16.374 00:21:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.374 [2024-05-15 00:21:42.512809] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:16.374 [2024-05-15 00:21:42.512889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762550 ] 00:06:16.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.632 [2024-05-15 00:21:42.581953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.632 [2024-05-15 00:21:42.688261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.632 00:21:42 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:16.632 00:21:42 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:16.632 00:21:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:16.632 00:21:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:16.632 00:21:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:16.632 00:21:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:16.632 00:21:42 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:16.632 00:21:42 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:16.632 00:21:42 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:16.632 00:21:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.633 ************************************ 00:06:16.633 START TEST accel_assign_opcode 00:06:16.633 ************************************ 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:16.633 [2024-05-15 00:21:42.744824] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:16.633 [2024-05-15 00:21:42.752838] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.633 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:16.891 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.891 00:21:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:16.891 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.891 00:21:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:16.891 00:21:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:16.891 00:21:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:16.891 00:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.891 software 00:06:16.891 00:06:16.891 real 0m0.293s 00:06:16.891 user 0m0.040s 00:06:16.891 sys 0m0.006s 00:06:16.891 00:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:16.891 00:21:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:16.891 ************************************ 00:06:16.891 END TEST accel_assign_opcode 00:06:16.891 ************************************ 00:06:16.891 00:21:43 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 762550 00:06:16.891 00:21:43 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 762550 ']' 00:06:16.891 00:21:43 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 762550 00:06:17.149 00:21:43 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:06:17.149 00:21:43 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:17.149 00:21:43 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 762550 00:06:17.149 00:21:43 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:17.149 00:21:43 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:17.149 00:21:43 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 762550' 00:06:17.149 killing process with pid 762550 00:06:17.149 00:21:43 accel_rpc -- common/autotest_common.sh@966 -- # kill 762550 00:06:17.150 00:21:43 accel_rpc -- common/autotest_common.sh@971 -- # wait 762550 00:06:17.409 00:06:17.409 real 0m1.158s 00:06:17.409 user 0m1.074s 00:06:17.409 sys 0m0.443s 00:06:17.409 00:21:43 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:17.409 00:21:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.409 ************************************ 00:06:17.409 END TEST accel_rpc 00:06:17.409 ************************************ 00:06:17.667 00:21:43 -- spdk/autotest.sh@181 -- # run_test 
app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.667 00:21:43 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:17.667 00:21:43 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:17.667 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:17.667 ************************************ 00:06:17.667 START TEST app_cmdline 00:06:17.667 ************************************ 00:06:17.667 00:21:43 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.667 * Looking for test storage... 00:06:17.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.667 00:21:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:17.667 00:21:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=762764 00:06:17.668 00:21:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:17.668 00:21:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 762764 00:06:17.668 00:21:43 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 762764 ']' 00:06:17.668 00:21:43 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.668 00:21:43 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:17.668 00:21:43 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.668 00:21:43 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:17.668 00:21:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.668 [2024-05-15 00:21:43.720112] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
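The accel_rpc test a little further up exercises SPDK's opcode-assignment RPCs against an spdk_tgt started with --wait-for-rpc. The same sequence can be replayed by hand with scripts/rpc.py; this is only a sketch assembled from the rpc_cmd calls visible in the log, and it assumes rpc.py is talking to the default /var/tmp/spdk.sock socket:

    # pin the 'copy' opcode to the software module before framework init,
    # then finish init and confirm the assignment stuck
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software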
00:06:17.668 [2024-05-15 00:21:43.720218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762764 ] 00:06:17.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.668 [2024-05-15 00:21:43.791494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.925 [2024-05-15 00:21:43.910850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.183 00:21:44 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:18.183 00:21:44 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:06:18.183 00:21:44 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:18.440 { 00:06:18.440 "version": "SPDK v24.05-pre git sha1 68960dff2", 00:06:18.440 "fields": { 00:06:18.440 "major": 24, 00:06:18.440 "minor": 5, 00:06:18.440 "patch": 0, 00:06:18.440 "suffix": "-pre", 00:06:18.440 "commit": "68960dff2" 00:06:18.440 } 00:06:18.440 } 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:18.440 00:21:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.440 00:21:44 
app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:18.440 00:21:44 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.697 request: 00:06:18.697 { 00:06:18.697 "method": "env_dpdk_get_mem_stats", 00:06:18.697 "req_id": 1 00:06:18.697 } 00:06:18.697 Got JSON-RPC error response 00:06:18.697 response: 00:06:18.697 { 00:06:18.697 "code": -32601, 00:06:18.697 "message": "Method not found" 00:06:18.697 } 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:18.697 00:21:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 762764 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 762764 ']' 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 762764 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:06:18.697 00:21:44 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:18.698 00:21:44 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 762764 00:06:18.698 00:21:44 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:18.698 00:21:44 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:18.698 00:21:44 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 762764' 00:06:18.698 killing process with pid 762764 00:06:18.698 00:21:44 app_cmdline -- common/autotest_common.sh@966 -- # kill 762764 00:06:18.698 00:21:44 app_cmdline -- common/autotest_common.sh@971 -- # wait 762764 00:06:19.264 00:06:19.264 real 0m1.714s 00:06:19.264 user 0m2.102s 00:06:19.264 sys 0m0.517s 00:06:19.264 00:21:45 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:19.264 00:21:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.264 ************************************ 00:06:19.264 END TEST app_cmdline 00:06:19.264 ************************************ 00:06:19.264 00:21:45 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.264 00:21:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:19.264 00:21:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:19.264 00:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.264 ************************************ 00:06:19.264 START TEST version 00:06:19.264 ************************************ 00:06:19.264 00:21:45 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.522 * Looking for test storage... 
00:06:19.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:19.522 00:21:45 version -- app/version.sh@17 -- # get_header_version major 00:06:19.522 00:21:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # cut -f2 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.522 00:21:45 version -- app/version.sh@17 -- # major=24 00:06:19.522 00:21:45 version -- app/version.sh@18 -- # get_header_version minor 00:06:19.522 00:21:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # cut -f2 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.522 00:21:45 version -- app/version.sh@18 -- # minor=5 00:06:19.522 00:21:45 version -- app/version.sh@19 -- # get_header_version patch 00:06:19.522 00:21:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # cut -f2 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.522 00:21:45 version -- app/version.sh@19 -- # patch=0 00:06:19.522 00:21:45 version -- app/version.sh@20 -- # get_header_version suffix 00:06:19.522 00:21:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # cut -f2 00:06:19.522 00:21:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.522 00:21:45 version -- app/version.sh@20 -- # suffix=-pre 00:06:19.522 00:21:45 version -- app/version.sh@22 -- # version=24.5 00:06:19.522 00:21:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.522 00:21:45 version -- app/version.sh@28 -- # version=24.5rc0 00:06:19.522 00:21:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:19.522 00:21:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.522 00:21:45 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:19.522 00:21:45 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:19.522 00:06:19.522 real 0m0.106s 00:06:19.522 user 0m0.061s 00:06:19.522 sys 0m0.068s 00:06:19.522 00:21:45 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:19.522 00:21:45 version -- common/autotest_common.sh@10 -- # set +x 00:06:19.522 ************************************ 00:06:19.522 END TEST version 00:06:19.522 ************************************ 00:06:19.522 00:21:45 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:19.522 00:21:45 -- spdk/autotest.sh@194 -- # uname -s 00:06:19.522 00:21:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:19.522 00:21:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.522 00:21:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.522 00:21:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
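The version test that just finished pulls SPDK_VERSION_MAJOR, SPDK_VERSION_MINOR, SPDK_VERSION_PATCH and SPDK_VERSION_SUFFIX out of include/spdk/version.h with grep/cut/tr, assembles the version string, and compares it against what the bundled python package reports via spdk.__version__. A rough equivalent of that header parsing, assuming it is run from the root of the SPDK tree used in this job:

    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor; (( patch != 0 )) && version+=".$patch"
    [[ $suffix == -pre ]] && version+=rc0
    echo "$version"    # 24.5rc0 in this run, matching python3 -c 'import spdk; print(spdk.__version__)'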
00:06:19.522 00:21:45 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:19.522 00:21:45 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:19.522 00:21:45 -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:19.522 00:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.522 00:21:45 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:19.522 00:21:45 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:19.522 00:21:45 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:19.522 00:21:45 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:19.522 00:21:45 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:19.522 00:21:45 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:19.522 00:21:45 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.522 00:21:45 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:19.522 00:21:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:19.522 00:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.522 ************************************ 00:06:19.522 START TEST nvmf_tcp 00:06:19.522 ************************************ 00:06:19.522 00:21:45 nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.522 * Looking for test storage... 00:06:19.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:19.522 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:19.522 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:19.522 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.522 00:21:45 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:19.522 00:21:45 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.522 00:21:45 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.523 00:21:45 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.523 00:21:45 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.523 00:21:45 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.523 00:21:45 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.523 00:21:45 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.523 00:21:45 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.523 00:21:45 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:19.523 00:21:45 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:19.523 00:21:45 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:19.523 00:21:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:19.523 00:21:45 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:19.523 00:21:45 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:19.523 00:21:45 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:19.523 
00:21:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.523 ************************************ 00:06:19.523 START TEST nvmf_example 00:06:19.523 ************************************ 00:06:19.523 00:21:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:19.783 * Looking for test storage... 00:06:19.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.783 00:21:45 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:19.784 00:21:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:22.315 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.315 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:22.316 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:22.316 Found net devices under 
0000:0a:00.0: cvl_0_0 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:22.316 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:22.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:06:22.316 00:06:22.316 --- 10.0.0.2 ping statistics --- 00:06:22.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.316 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:06:22.316 00:06:22.316 --- 10.0.0.1 ping statistics --- 00:06:22.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.316 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=765077 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 765077 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 765077 ']' 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
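nvmftestinit above wires the two E810 ports into a small point-to-point topology for the TCP tests: cvl_0_0 is moved into a freshly created network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened, and reachability is verified with ping in both directions. Condensed from the trace above (interface names and addresses are specific to this machine and run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator side

The example target (build/examples/nvmf -i 0 -g 10000 -m 0xF) is then started inside that namespace with ip netns exec, and it is the process the harness waits for on /var/tmp/spdk.sock.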
00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:22.316 00:21:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.575 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:23.509 00:21:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:23.509 EAL: No free 2048 kB hugepages reported on node 1 
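With pid 765077 up, the test provisions the target over JSON-RPC and then drives it with spdk_nvme_perf from the root namespace. These are the same rpc_cmd invocations shown in the trace above, written out in plain rpc.py form with paths abbreviated (sizes, addresses and NQNs are taken verbatim from this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512               # 64 MiB malloc bdev with 512-byte blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf output that follows shows the single namespace sustaining roughly 13.3k IOPS of 4 KiB mixed random I/O at about 4.8 ms average latency over the 10-second run.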
00:06:35.706 Initializing NVMe Controllers 00:06:35.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:35.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:35.706 Initialization complete. Launching workers. 00:06:35.706 ======================================================== 00:06:35.706 Latency(us) 00:06:35.706 Device Information : IOPS MiB/s Average min max 00:06:35.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13351.38 52.15 4793.01 922.74 15208.21 00:06:35.706 ======================================================== 00:06:35.706 Total : 13351.38 52.15 4793.01 922.74 15208.21 00:06:35.706 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:35.706 rmmod nvme_tcp 00:06:35.706 rmmod nvme_fabrics 00:06:35.706 rmmod nvme_keyring 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 765077 ']' 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 765077 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 765077 ']' 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 765077 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 765077 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 765077' 00:06:35.706 killing process with pid 765077 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 765077 00:06:35.706 00:21:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 765077 00:06:35.706 nvmf threads initialize successfully 00:06:35.706 bdev subsystem init successfully 00:06:35.706 created a nvmf target service 00:06:35.706 create targets's poll groups done 00:06:35.706 all subsystems of target started 00:06:35.706 nvmf target is running 00:06:35.706 all subsystems of target stopped 00:06:35.706 destroy targets's poll groups done 00:06:35.706 destroyed the nvmf target service 00:06:35.706 bdev subsystem finish successfully 00:06:35.706 nvmf threads destroy successfully 00:06:35.706 00:22:00 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:35.706 00:22:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:35.707 00:22:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:35.707 00:22:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:35.707 00:22:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:35.707 00:22:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.707 00:22:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.707 00:22:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.338 00:22:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:36.338 00:22:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:36.338 00:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:36.338 00:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:36.338 00:06:36.338 real 0m16.560s 00:06:36.338 user 0m39.951s 00:06:36.338 sys 0m5.379s 00:06:36.338 00:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:36.338 00:22:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:36.338 ************************************ 00:06:36.338 END TEST nvmf_example 00:06:36.338 ************************************ 00:06:36.338 00:22:02 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:36.338 00:22:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:36.338 00:22:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:36.338 00:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.338 ************************************ 00:06:36.338 START TEST nvmf_filesystem 00:06:36.338 ************************************ 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:36.338 * Looking for test storage... 
00:06:36.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:36.338 00:22:02 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:36.338 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:36.339 #define SPDK_CONFIG_H 00:06:36.339 #define SPDK_CONFIG_APPS 1 00:06:36.339 #define SPDK_CONFIG_ARCH native 00:06:36.339 #undef SPDK_CONFIG_ASAN 00:06:36.339 #undef SPDK_CONFIG_AVAHI 00:06:36.339 #undef SPDK_CONFIG_CET 00:06:36.339 #define SPDK_CONFIG_COVERAGE 1 00:06:36.339 #define SPDK_CONFIG_CROSS_PREFIX 00:06:36.339 #undef SPDK_CONFIG_CRYPTO 00:06:36.339 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:36.339 #undef SPDK_CONFIG_CUSTOMOCF 00:06:36.339 #undef SPDK_CONFIG_DAOS 00:06:36.339 #define SPDK_CONFIG_DAOS_DIR 00:06:36.339 #define SPDK_CONFIG_DEBUG 1 00:06:36.339 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:36.339 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:36.339 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:36.339 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:36.339 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:36.339 #undef SPDK_CONFIG_DPDK_UADK 00:06:36.339 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:36.339 #define SPDK_CONFIG_EXAMPLES 1 00:06:36.339 #undef SPDK_CONFIG_FC 00:06:36.339 #define SPDK_CONFIG_FC_PATH 00:06:36.339 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:36.339 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:36.339 #undef SPDK_CONFIG_FUSE 00:06:36.339 #undef SPDK_CONFIG_FUZZER 00:06:36.339 #define SPDK_CONFIG_FUZZER_LIB 00:06:36.339 #undef SPDK_CONFIG_GOLANG 00:06:36.339 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:36.339 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:36.339 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:36.339 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:36.339 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:36.339 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:36.339 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:36.339 #define SPDK_CONFIG_IDXD 1 00:06:36.339 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:36.339 #undef SPDK_CONFIG_IPSEC_MB 00:06:36.339 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:36.339 #define SPDK_CONFIG_ISAL 1 00:06:36.339 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:36.339 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:36.339 #define SPDK_CONFIG_LIBDIR 00:06:36.339 #undef SPDK_CONFIG_LTO 00:06:36.339 #define SPDK_CONFIG_MAX_LCORES 00:06:36.339 #define SPDK_CONFIG_NVME_CUSE 1 00:06:36.339 #undef SPDK_CONFIG_OCF 00:06:36.339 #define SPDK_CONFIG_OCF_PATH 00:06:36.339 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:36.339 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:36.339 #define SPDK_CONFIG_PGO_DIR 00:06:36.339 #undef SPDK_CONFIG_PGO_USE 00:06:36.339 #define SPDK_CONFIG_PREFIX /usr/local 00:06:36.339 #undef SPDK_CONFIG_RAID5F 00:06:36.339 #undef SPDK_CONFIG_RBD 00:06:36.339 #define SPDK_CONFIG_RDMA 1 00:06:36.339 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:36.339 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:36.339 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:36.339 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:36.339 #define SPDK_CONFIG_SHARED 1 00:06:36.339 #undef SPDK_CONFIG_SMA 00:06:36.339 #define SPDK_CONFIG_TESTS 1 00:06:36.339 #undef SPDK_CONFIG_TSAN 00:06:36.339 #define SPDK_CONFIG_UBLK 1 00:06:36.339 #define SPDK_CONFIG_UBSAN 1 00:06:36.339 #undef SPDK_CONFIG_UNIT_TESTS 00:06:36.339 #undef SPDK_CONFIG_URING 00:06:36.339 #define SPDK_CONFIG_URING_PATH 00:06:36.339 #undef SPDK_CONFIG_URING_ZNS 00:06:36.339 #undef SPDK_CONFIG_USDT 00:06:36.339 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:36.339 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:36.339 #define SPDK_CONFIG_VFIO_USER 1 00:06:36.339 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:36.339 #define SPDK_CONFIG_VHOST 1 00:06:36.339 #define SPDK_CONFIG_VIRTIO 1 00:06:36.339 #undef SPDK_CONFIG_VTUNE 00:06:36.339 #define SPDK_CONFIG_VTUNE_DIR 00:06:36.339 #define SPDK_CONFIG_WERROR 1 00:06:36.339 #define SPDK_CONFIG_WPDK_DIR 00:06:36.339 #undef SPDK_CONFIG_XNVME 00:06:36.339 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:36.339 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:36.340 00:22:02 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:36.340 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
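The exports traced above are autotest_common.sh assembling the sanitizer and runtime environment before any test binary runs. A minimal consolidated sketch of that pattern, with the option strings and paths copied from the trace (a readability sketch, not the script itself):

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    echo "leak:libfuse3.so" > /var/tmp/asan_suppression_file    # LeakSanitizer suppression the script writes
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock                  # default RPC socket the tests talk to
    export HUGEMEM=4096 CLEAR_HUGE=yes                          # hugepage allocation requested for the run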
00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 766796 ]] 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 766796 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.fJNfpZ 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fJNfpZ/tests/target /tmp/spdk.fJNfpZ 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=968667136 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4315762688 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:36.341 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=48323293184 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994729472 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=13671436288 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941728768 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12389986304 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8962048 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30995836928 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:06:36.342 00:22:02 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1527808 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:36.342 * Looking for test storage... 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=48323293184 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=15886028800 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:06:36.342 00:22:02 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.342 
00:22:02 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.342 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.343 00:22:02 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.343 00:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:38.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:38.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.872 00:22:04 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.872 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:38.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:38.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:38.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:06:38.873 00:06:38.873 --- 10.0.0.2 ping statistics --- 00:06:38.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.873 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:06:38.873 00:06:38.873 --- 10.0.0.1 ping statistics --- 00:06:38.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.873 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:38.873 00:22:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.873 ************************************ 00:06:38.873 START TEST nvmf_filesystem_no_in_capsule 00:06:38.873 ************************************ 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # 
xtrace_disable 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=768730 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 768730 00:06:38.873 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 768730 ']' 00:06:39.132 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.132 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:39.132 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.132 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:39.132 00:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.132 [2024-05-15 00:22:05.082773] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:39.132 [2024-05-15 00:22:05.082859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.132 [2024-05-15 00:22:05.164690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.132 [2024-05-15 00:22:05.288094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.132 [2024-05-15 00:22:05.288153] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.132 [2024-05-15 00:22:05.288178] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.132 [2024-05-15 00:22:05.288191] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.132 [2024-05-15 00:22:05.288203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
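[editor's note] For reference, the TCP test-network setup traced above (nvmf_tcp_init in nvmf/common.sh) plus the target launch reduce to the sequence below. This is a condensed sketch of commands already visible in the trace, reusing this run's interface names (cvl_0_0/cvl_0_1), addresses and namespace; the nvmf_tgt path is shortened from the full workspace path shown above.

    # target-side interface goes into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 on cvl_0_1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and load the host-side driver
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp
    # start the SPDK target inside the namespace (what nvmfappstart does above)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF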
00:06:39.132 [2024-05-15 00:22:05.288308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.132 [2024-05-15 00:22:05.288362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.132 [2024-05-15 00:22:05.288419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.132 [2024-05-15 00:22:05.288422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.067 [2024-05-15 00:22:06.097174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.067 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.326 Malloc1 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.326 [2024-05-15 00:22:06.280628] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:40.326 [2024-05-15 00:22:06.280941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:06:40.326 { 00:06:40.326 "name": "Malloc1", 00:06:40.326 "aliases": [ 00:06:40.326 "7eb7d265-3bb1-4859-9da7-50ca438bd39d" 00:06:40.326 ], 00:06:40.326 "product_name": "Malloc disk", 00:06:40.326 "block_size": 512, 00:06:40.326 "num_blocks": 1048576, 00:06:40.326 "uuid": "7eb7d265-3bb1-4859-9da7-50ca438bd39d", 00:06:40.326 "assigned_rate_limits": { 00:06:40.326 "rw_ios_per_sec": 0, 00:06:40.326 "rw_mbytes_per_sec": 0, 00:06:40.326 "r_mbytes_per_sec": 0, 00:06:40.326 "w_mbytes_per_sec": 0 00:06:40.326 }, 00:06:40.326 "claimed": true, 00:06:40.326 "claim_type": "exclusive_write", 00:06:40.326 "zoned": false, 00:06:40.326 "supported_io_types": { 00:06:40.326 "read": true, 00:06:40.326 "write": true, 00:06:40.326 "unmap": true, 00:06:40.326 "write_zeroes": true, 00:06:40.326 "flush": true, 00:06:40.326 "reset": true, 00:06:40.326 "compare": false, 00:06:40.326 "compare_and_write": false, 00:06:40.326 "abort": true, 00:06:40.326 "nvme_admin": false, 00:06:40.326 "nvme_io": false 00:06:40.326 }, 00:06:40.326 "memory_domains": [ 00:06:40.326 { 00:06:40.326 "dma_device_id": "system", 00:06:40.326 "dma_device_type": 1 
00:06:40.326 }, 00:06:40.326 { 00:06:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.326 "dma_device_type": 2 00:06:40.326 } 00:06:40.326 ], 00:06:40.326 "driver_specific": {} 00:06:40.326 } 00:06:40.326 ]' 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:40.326 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:40.892 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:40.892 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:06:40.892 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:06:40.892 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:06:40.892 00:22:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:43.421 00:22:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:43.421 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:43.422 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:43.422 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:43.422 00:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:43.422 00:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:44.356 00:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.290 ************************************ 00:06:45.290 START TEST filesystem_ext4 00:06:45.290 ************************************ 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:06:45.290 00:22:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:06:45.290 00:22:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:45.290 mke2fs 1.46.5 (30-Dec-2021) 00:06:45.290 Discarding device blocks: 0/522240 done 00:06:45.290 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:45.290 Filesystem UUID: 02fbb087-1b61-4fa6-84f0-d960b718317b 00:06:45.290 Superblock backups stored on blocks: 00:06:45.290 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:45.290 00:06:45.290 Allocating group tables: 0/64 done 00:06:45.290 Writing inode tables: 0/64 done 00:06:48.570 Creating journal (8192 blocks): done 00:06:49.134 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:06:49.134 00:06:49.134 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:06:49.134 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.065 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.065 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:50.065 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.065 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:50.065 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:50.065 00:22:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 768730 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.065 00:06:50.065 real 0m4.702s 00:06:50.065 user 0m0.015s 00:06:50.065 sys 0m0.034s 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:50.065 ************************************ 00:06:50.065 END TEST filesystem_ext4 00:06:50.065 ************************************ 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:50.065 00:22:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.065 ************************************ 00:06:50.065 START TEST filesystem_btrfs 00:06:50.065 ************************************ 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:06:50.065 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:50.323 btrfs-progs v6.6.2 00:06:50.323 See https://btrfs.readthedocs.io for more information. 00:06:50.323 00:06:50.323 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:50.323 NOTE: several default settings have changed in version 5.15, please make sure 00:06:50.323 this does not affect your deployments: 00:06:50.323 - DUP for metadata (-m dup) 00:06:50.323 - enabled no-holes (-O no-holes) 00:06:50.323 - enabled free-space-tree (-R free-space-tree) 00:06:50.323 00:06:50.323 Label: (null) 00:06:50.323 UUID: 8660c336-8626-490f-b391-01ca27c69573 00:06:50.323 Node size: 16384 00:06:50.323 Sector size: 4096 00:06:50.323 Filesystem size: 510.00MiB 00:06:50.323 Block group profiles: 00:06:50.323 Data: single 8.00MiB 00:06:50.323 Metadata: DUP 32.00MiB 00:06:50.323 System: DUP 8.00MiB 00:06:50.323 SSD detected: yes 00:06:50.323 Zoned device: no 00:06:50.323 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:50.323 Runtime features: free-space-tree 00:06:50.323 Checksum: crc32c 00:06:50.323 Number of devices: 1 00:06:50.323 Devices: 00:06:50.323 ID SIZE PATH 00:06:50.323 1 510.00MiB /dev/nvme0n1p1 00:06:50.323 00:06:50.323 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:06:50.323 00:22:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 768730 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.257 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.257 00:06:51.257 real 0m1.098s 00:06:51.258 user 0m0.009s 00:06:51.258 sys 0m0.049s 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:51.258 ************************************ 00:06:51.258 END TEST filesystem_btrfs 00:06:51.258 ************************************ 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:51.258 00:22:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.258 ************************************ 00:06:51.258 START TEST filesystem_xfs 00:06:51.258 ************************************ 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:06:51.258 00:22:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:51.258 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:51.258 = sectsz=512 attr=2, projid32bit=1 00:06:51.258 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:51.258 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:51.258 data = bsize=4096 blocks=130560, imaxpct=25 00:06:51.258 = sunit=0 swidth=0 blks 00:06:51.258 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:51.258 log =internal log bsize=4096 blocks=16384, version=2 00:06:51.258 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:51.258 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:52.190 Discarding blocks...Done. 
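[editor's note] The mkfs.xfs output above completes the format step; the xfs mount and cleanup steps follow below in the trace. All three filesystem_<fstype> sub-tests run the same pattern from target/filesystem.sh: format the exported namespace's partition, mount it, write and remove a file, unmount, and confirm the target process (pid 768730 in this run) is still alive. A condensed manual equivalent, assuming the /dev/nvme0n1p1 partition created by parted earlier:

    mkfs.xfs -f /dev/nvme0n1p1       # ext4 uses -F, btrfs uses -f (see make_filesystem above)
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"               # target must still be running after the I/O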
00:06:52.190 00:22:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:06:52.190 00:22:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 768730 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:54.751 00:06:54.751 real 0m3.609s 00:06:54.751 user 0m0.014s 00:06:54.751 sys 0m0.035s 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:54.751 ************************************ 00:06:54.751 END TEST filesystem_xfs 00:06:54.751 ************************************ 00:06:54.751 00:22:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:55.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:06:55.009 
00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.009 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 768730 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 768730 ']' 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 768730 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 768730 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 768730' 00:06:55.267 killing process with pid 768730 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 768730 00:06:55.267 [2024-05-15 00:22:21.211859] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:55.267 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 768730 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:55.834 00:06:55.834 real 0m16.680s 00:06:55.834 user 1m4.153s 00:06:55.834 sys 0m2.185s 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 ************************************ 00:06:55.834 END TEST nvmf_filesystem_no_in_capsule 00:06:55.834 ************************************ 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 
1 ']' 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 ************************************ 00:06:55.834 START TEST nvmf_filesystem_in_capsule 00:06:55.834 ************************************ 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=770944 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 770944 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 770944 ']' 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:55.834 00:22:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 [2024-05-15 00:22:21.818605] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:55.834 [2024-05-15 00:22:21.818682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.834 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.834 [2024-05-15 00:22:21.895877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.093 [2024-05-15 00:22:22.003520] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.093 [2024-05-15 00:22:22.003562] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:56.093 [2024-05-15 00:22:22.003576] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.093 [2024-05-15 00:22:22.003593] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.093 [2024-05-15 00:22:22.003604] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.093 [2024-05-15 00:22:22.003683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.093 [2024-05-15 00:22:22.003748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.093 [2024-05-15 00:22:22.003815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.093 [2024-05-15 00:22:22.003818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.093 [2024-05-15 00:22:22.154763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.093 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.351 Malloc1 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.351 00:22:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.351 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.352 [2024-05-15 00:22:22.340555] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:56.352 [2024-05-15 00:22:22.340835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:06:56.352 { 00:06:56.352 "name": "Malloc1", 00:06:56.352 "aliases": [ 00:06:56.352 "f5f6a7a1-e831-4214-872a-ce2c23f662f9" 00:06:56.352 ], 00:06:56.352 "product_name": "Malloc disk", 00:06:56.352 "block_size": 512, 00:06:56.352 "num_blocks": 1048576, 00:06:56.352 "uuid": "f5f6a7a1-e831-4214-872a-ce2c23f662f9", 00:06:56.352 "assigned_rate_limits": { 00:06:56.352 "rw_ios_per_sec": 0, 00:06:56.352 "rw_mbytes_per_sec": 0, 00:06:56.352 "r_mbytes_per_sec": 0, 00:06:56.352 "w_mbytes_per_sec": 0 00:06:56.352 }, 00:06:56.352 "claimed": true, 00:06:56.352 "claim_type": "exclusive_write", 00:06:56.352 "zoned": false, 00:06:56.352 "supported_io_types": { 00:06:56.352 "read": true, 00:06:56.352 "write": true, 00:06:56.352 "unmap": true, 00:06:56.352 "write_zeroes": true, 00:06:56.352 "flush": true, 00:06:56.352 "reset": true, 
00:06:56.352 "compare": false, 00:06:56.352 "compare_and_write": false, 00:06:56.352 "abort": true, 00:06:56.352 "nvme_admin": false, 00:06:56.352 "nvme_io": false 00:06:56.352 }, 00:06:56.352 "memory_domains": [ 00:06:56.352 { 00:06:56.352 "dma_device_id": "system", 00:06:56.352 "dma_device_type": 1 00:06:56.352 }, 00:06:56.352 { 00:06:56.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.352 "dma_device_type": 2 00:06:56.352 } 00:06:56.352 ], 00:06:56.352 "driver_specific": {} 00:06:56.352 } 00:06:56.352 ]' 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:56.352 00:22:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.917 00:22:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:56.917 00:22:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:06:56.917 00:22:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:06:56.917 00:22:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:06:56.917 00:22:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:59.448 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:59.705 00:22:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.639 ************************************ 00:07:00.639 START TEST filesystem_in_capsule_ext4 00:07:00.639 ************************************ 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:00.639 00:22:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:00.639 mke2fs 1.46.5 (30-Dec-2021) 00:07:00.639 Discarding device blocks: 0/522240 done 00:07:00.639 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:00.639 Filesystem UUID: 90328a33-25b9-4d2e-9b75-bc7d203a7f6b 00:07:00.639 Superblock backups stored on blocks: 00:07:00.639 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:00.639 00:07:00.639 Allocating group tables: 0/64 done 00:07:00.639 Writing inode tables: 0/64 done 00:07:01.571 Creating journal (8192 blocks): done 00:07:01.572 Writing superblocks and filesystem accounting information: 0/64 done 00:07:01.572 00:07:01.572 00:22:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:01.572 00:22:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:02.137 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 770944 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:02.395 00:07:02.395 real 0m1.718s 00:07:02.395 user 0m0.018s 00:07:02.395 sys 0m0.037s 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:02.395 ************************************ 00:07:02.395 END TEST filesystem_in_capsule_ext4 00:07:02.395 ************************************ 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.395 ************************************ 00:07:02.395 START TEST filesystem_in_capsule_btrfs 00:07:02.395 ************************************ 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:02.395 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:02.653 btrfs-progs v6.6.2 00:07:02.653 See https://btrfs.readthedocs.io for more information. 00:07:02.653 00:07:02.653 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:02.653 NOTE: several default settings have changed in version 5.15, please make sure 00:07:02.653 this does not affect your deployments: 00:07:02.653 - DUP for metadata (-m dup) 00:07:02.653 - enabled no-holes (-O no-holes) 00:07:02.653 - enabled free-space-tree (-R free-space-tree) 00:07:02.653 00:07:02.653 Label: (null) 00:07:02.653 UUID: 8a4b830e-2641-4dab-9e57-3e344d0b8f68 00:07:02.653 Node size: 16384 00:07:02.653 Sector size: 4096 00:07:02.653 Filesystem size: 510.00MiB 00:07:02.653 Block group profiles: 00:07:02.653 Data: single 8.00MiB 00:07:02.653 Metadata: DUP 32.00MiB 00:07:02.653 System: DUP 8.00MiB 00:07:02.653 SSD detected: yes 00:07:02.653 Zoned device: no 00:07:02.653 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:02.653 Runtime features: free-space-tree 00:07:02.653 Checksum: crc32c 00:07:02.653 Number of devices: 1 00:07:02.653 Devices: 00:07:02.653 ID SIZE PATH 00:07:02.653 1 510.00MiB /dev/nvme0n1p1 00:07:02.653 00:07:02.653 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:02.653 00:22:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 770944 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.587 00:07:03.587 real 0m1.002s 00:07:03.587 user 0m0.019s 00:07:03.587 sys 0m0.045s 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:03.587 ************************************ 00:07:03.587 END TEST filesystem_in_capsule_btrfs 00:07:03.587 ************************************ 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.587 ************************************ 00:07:03.587 START TEST filesystem_in_capsule_xfs 00:07:03.587 ************************************ 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:03.587 00:22:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:03.587 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:03.587 = sectsz=512 attr=2, projid32bit=1 00:07:03.587 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:03.587 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:03.587 data = bsize=4096 blocks=130560, imaxpct=25 00:07:03.587 = sunit=0 swidth=0 blks 00:07:03.587 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:03.587 log =internal log bsize=4096 blocks=16384, version=2 00:07:03.587 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:03.587 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:04.520 Discarding blocks...Done. 
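(For reference, the xfs run now proceeds through the same mount/write/verify sequence already traced for the ext4 and btrfs cases above. A minimal standalone sketch of that sequence follows; the device name, mount point, partitioning command, and force-flag selection are taken from this log, while everything else — including the $nvmfpid variable — is an assumption, so treat it as an illustration rather than the SPDK test script itself.)

# Sketch of the filesystem_in_capsule pattern exercised above (assumptions noted inline).
fstype=xfs                                   # ext4 | btrfs | xfs in the runs above
nvme_name=nvme0n1                            # NVMe-oF attached namespace from this log
mnt=/mnt/device

parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%   # partition the namespace
partprobe && sleep 1
if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi        # -F for ext4, -f otherwise
mkfs.$fstype $force /dev/${nvme_name}p1

mkdir -p $mnt
mount /dev/${nvme_name}p1 $mnt               # mount, write a file, sync, remove it, unmount
touch $mnt/aaa
sync
rm $mnt/aaa
sync
umount $mnt

kill -0 $nvmfpid                             # target pid (770944 in this log) must still be alive
lsblk -l -o NAME | grep -q -w $nvme_name     # namespace and partition still visible
lsblk -l -o NAME | grep -q -w ${nvme_name}p1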
00:07:04.520 00:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:04.520 00:22:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 770944 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.049 00:07:07.049 real 0m3.448s 00:07:07.049 user 0m0.022s 00:07:07.049 sys 0m0.034s 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:07.049 ************************************ 00:07:07.049 END TEST filesystem_in_capsule_xfs 00:07:07.049 ************************************ 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:07.049 00:22:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.049 00:22:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 770944 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 770944 ']' 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 770944 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:07.049 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 770944 00:07:07.050 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:07.050 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:07.050 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 770944' 00:07:07.050 killing process with pid 770944 00:07:07.050 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 770944 00:07:07.050 [2024-05-15 00:22:33.098261] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:07.050 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 770944 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:07.615 00:07:07.615 real 0m11.823s 00:07:07.615 user 0m45.130s 00:07:07.615 sys 0m1.672s 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.615 ************************************ 00:07:07.615 END TEST nvmf_filesystem_in_capsule 00:07:07.615 ************************************ 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.615 rmmod nvme_tcp 00:07:07.615 rmmod nvme_fabrics 00:07:07.615 rmmod nvme_keyring 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.615 00:22:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.186 00:22:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.186 00:07:10.186 real 0m33.431s 00:07:10.186 user 1m50.347s 00:07:10.186 sys 0m5.741s 00:07:10.186 00:22:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:10.186 00:22:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.186 ************************************ 00:07:10.186 END TEST nvmf_filesystem 00:07:10.186 ************************************ 00:07:10.186 00:22:35 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:10.186 00:22:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:10.186 00:22:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:10.186 00:22:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.186 ************************************ 00:07:10.186 START TEST nvmf_target_discovery 00:07:10.186 ************************************ 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:10.186 * Looking for test storage... 
00:07:10.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.186 00:22:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.187 00:22:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.748 00:22:38 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:12.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:12.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:12.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:12.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.748 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:07:12.749 00:07:12.749 --- 10.0.0.2 ping statistics --- 00:07:12.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.749 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:07:12.749 00:07:12.749 --- 10.0.0.1 ping statistics --- 00:07:12.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.749 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=774843 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 774843 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 774843 ']' 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:12.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:12.749 00:22:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.749 [2024-05-15 00:22:38.668605] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:12.749 [2024-05-15 00:22:38.668685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.749 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.749 [2024-05-15 00:22:38.746354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.749 [2024-05-15 00:22:38.862175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.749 [2024-05-15 00:22:38.862227] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.749 [2024-05-15 00:22:38.862241] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.749 [2024-05-15 00:22:38.862252] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.749 [2024-05-15 00:22:38.862263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.749 [2024-05-15 00:22:38.862318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.749 [2024-05-15 00:22:38.862381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.749 [2024-05-15 00:22:38.862440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.749 [2024-05-15 00:22:38.862442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 [2024-05-15 00:22:39.700090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:13.684 00:22:39 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 Null1 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 [2024-05-15 00:22:39.740159] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:13.684 [2024-05-15 00:22:39.740420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 Null2 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.684 Null3 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.684 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.685 Null4 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.685 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.943 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.943 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:13.943 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.943 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.943 00:22:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.943 00:22:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:13.943 00:07:13.943 Discovery Log Number of Records 6, Generation counter 6 00:07:13.943 =====Discovery Log Entry 0====== 00:07:13.943 trtype: tcp 00:07:13.943 adrfam: ipv4 00:07:13.943 subtype: current discovery subsystem 00:07:13.943 treq: not required 00:07:13.943 portid: 0 00:07:13.943 trsvcid: 4420 00:07:13.943 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:13.943 traddr: 10.0.0.2 00:07:13.943 eflags: explicit discovery connections, duplicate discovery information 00:07:13.943 sectype: none 00:07:13.943 =====Discovery Log Entry 1====== 00:07:13.943 trtype: tcp 00:07:13.943 adrfam: ipv4 00:07:13.943 subtype: nvme subsystem 00:07:13.943 treq: not required 00:07:13.943 portid: 0 00:07:13.943 trsvcid: 4420 00:07:13.943 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:13.943 traddr: 10.0.0.2 00:07:13.943 eflags: none 00:07:13.943 sectype: none 00:07:13.943 =====Discovery Log Entry 2====== 00:07:13.944 trtype: tcp 00:07:13.944 adrfam: ipv4 00:07:13.944 subtype: nvme subsystem 00:07:13.944 treq: not required 00:07:13.944 portid: 0 00:07:13.944 trsvcid: 4420 00:07:13.944 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:13.944 traddr: 10.0.0.2 00:07:13.944 eflags: none 00:07:13.944 sectype: none 00:07:13.944 =====Discovery Log Entry 3====== 00:07:13.944 trtype: tcp 00:07:13.944 adrfam: ipv4 00:07:13.944 subtype: nvme subsystem 00:07:13.944 treq: not required 00:07:13.944 portid: 0 00:07:13.944 trsvcid: 4420 00:07:13.944 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:13.944 traddr: 10.0.0.2 
00:07:13.944 eflags: none 00:07:13.944 sectype: none 00:07:13.944 =====Discovery Log Entry 4====== 00:07:13.944 trtype: tcp 00:07:13.944 adrfam: ipv4 00:07:13.944 subtype: nvme subsystem 00:07:13.944 treq: not required 00:07:13.944 portid: 0 00:07:13.944 trsvcid: 4420 00:07:13.944 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:13.944 traddr: 10.0.0.2 00:07:13.944 eflags: none 00:07:13.944 sectype: none 00:07:13.944 =====Discovery Log Entry 5====== 00:07:13.944 trtype: tcp 00:07:13.944 adrfam: ipv4 00:07:13.944 subtype: discovery subsystem referral 00:07:13.944 treq: not required 00:07:13.944 portid: 0 00:07:13.944 trsvcid: 4430 00:07:13.944 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:13.944 traddr: 10.0.0.2 00:07:13.944 eflags: none 00:07:13.944 sectype: none 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:13.944 Perform nvmf subsystem discovery via RPC 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 [ 00:07:13.944 { 00:07:13.944 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:13.944 "subtype": "Discovery", 00:07:13.944 "listen_addresses": [ 00:07:13.944 { 00:07:13.944 "trtype": "TCP", 00:07:13.944 "adrfam": "IPv4", 00:07:13.944 "traddr": "10.0.0.2", 00:07:13.944 "trsvcid": "4420" 00:07:13.944 } 00:07:13.944 ], 00:07:13.944 "allow_any_host": true, 00:07:13.944 "hosts": [] 00:07:13.944 }, 00:07:13.944 { 00:07:13.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:13.944 "subtype": "NVMe", 00:07:13.944 "listen_addresses": [ 00:07:13.944 { 00:07:13.944 "trtype": "TCP", 00:07:13.944 "adrfam": "IPv4", 00:07:13.944 "traddr": "10.0.0.2", 00:07:13.944 "trsvcid": "4420" 00:07:13.944 } 00:07:13.944 ], 00:07:13.944 "allow_any_host": true, 00:07:13.944 "hosts": [], 00:07:13.944 "serial_number": "SPDK00000000000001", 00:07:13.944 "model_number": "SPDK bdev Controller", 00:07:13.944 "max_namespaces": 32, 00:07:13.944 "min_cntlid": 1, 00:07:13.944 "max_cntlid": 65519, 00:07:13.944 "namespaces": [ 00:07:13.944 { 00:07:13.944 "nsid": 1, 00:07:13.944 "bdev_name": "Null1", 00:07:13.944 "name": "Null1", 00:07:13.944 "nguid": "2E1501094C8E490E9ED716B04D34F7FD", 00:07:13.944 "uuid": "2e150109-4c8e-490e-9ed7-16b04d34f7fd" 00:07:13.944 } 00:07:13.944 ] 00:07:13.944 }, 00:07:13.944 { 00:07:13.944 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:13.944 "subtype": "NVMe", 00:07:13.944 "listen_addresses": [ 00:07:13.944 { 00:07:13.944 "trtype": "TCP", 00:07:13.944 "adrfam": "IPv4", 00:07:13.944 "traddr": "10.0.0.2", 00:07:13.944 "trsvcid": "4420" 00:07:13.944 } 00:07:13.944 ], 00:07:13.944 "allow_any_host": true, 00:07:13.944 "hosts": [], 00:07:13.944 "serial_number": "SPDK00000000000002", 00:07:13.944 "model_number": "SPDK bdev Controller", 00:07:13.944 "max_namespaces": 32, 00:07:13.944 "min_cntlid": 1, 00:07:13.944 "max_cntlid": 65519, 00:07:13.944 "namespaces": [ 00:07:13.944 { 00:07:13.944 "nsid": 1, 00:07:13.944 "bdev_name": "Null2", 00:07:13.944 "name": "Null2", 00:07:13.944 "nguid": "EC6BE533B7384124AFF66DC799032342", 00:07:13.944 "uuid": "ec6be533-b738-4124-aff6-6dc799032342" 00:07:13.944 } 00:07:13.944 ] 00:07:13.944 }, 00:07:13.944 { 00:07:13.944 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:13.944 "subtype": "NVMe", 00:07:13.944 "listen_addresses": [ 
00:07:13.944 { 00:07:13.944 "trtype": "TCP", 00:07:13.944 "adrfam": "IPv4", 00:07:13.944 "traddr": "10.0.0.2", 00:07:13.944 "trsvcid": "4420" 00:07:13.944 } 00:07:13.944 ], 00:07:13.944 "allow_any_host": true, 00:07:13.944 "hosts": [], 00:07:13.944 "serial_number": "SPDK00000000000003", 00:07:13.944 "model_number": "SPDK bdev Controller", 00:07:13.944 "max_namespaces": 32, 00:07:13.944 "min_cntlid": 1, 00:07:13.944 "max_cntlid": 65519, 00:07:13.944 "namespaces": [ 00:07:13.944 { 00:07:13.944 "nsid": 1, 00:07:13.944 "bdev_name": "Null3", 00:07:13.944 "name": "Null3", 00:07:13.944 "nguid": "5E7DD69018E9411C9CD02AE074C553DD", 00:07:13.944 "uuid": "5e7dd690-18e9-411c-9cd0-2ae074c553dd" 00:07:13.944 } 00:07:13.944 ] 00:07:13.944 }, 00:07:13.944 { 00:07:13.944 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:13.944 "subtype": "NVMe", 00:07:13.944 "listen_addresses": [ 00:07:13.944 { 00:07:13.944 "trtype": "TCP", 00:07:13.944 "adrfam": "IPv4", 00:07:13.944 "traddr": "10.0.0.2", 00:07:13.944 "trsvcid": "4420" 00:07:13.944 } 00:07:13.944 ], 00:07:13.944 "allow_any_host": true, 00:07:13.944 "hosts": [], 00:07:13.944 "serial_number": "SPDK00000000000004", 00:07:13.944 "model_number": "SPDK bdev Controller", 00:07:13.944 "max_namespaces": 32, 00:07:13.944 "min_cntlid": 1, 00:07:13.944 "max_cntlid": 65519, 00:07:13.944 "namespaces": [ 00:07:13.944 { 00:07:13.944 "nsid": 1, 00:07:13.944 "bdev_name": "Null4", 00:07:13.944 "name": "Null4", 00:07:13.944 "nguid": "C955B88EEB8D4A7596DD62AA43882CB3", 00:07:13.944 "uuid": "c955b88e-eb8d-4a75-96dd-62aa43882cb3" 00:07:13.944 } 00:07:13.944 ] 00:07:13.944 } 00:07:13.944 ] 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.944 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:14.206 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.206 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:14.206 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:14.206 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:14.207 
00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.207 rmmod nvme_tcp 00:07:14.207 rmmod nvme_fabrics 00:07:14.207 rmmod nvme_keyring 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 774843 ']' 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 774843 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 774843 ']' 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 774843 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 774843 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 774843' 00:07:14.207 killing process with pid 774843 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 774843 00:07:14.207 [2024-05-15 00:22:40.256912] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:14.207 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 774843 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.466 00:22:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.004 00:22:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.004 00:07:17.004 real 0m6.820s 00:07:17.004 user 0m7.770s 
00:07:17.004 sys 0m2.302s 00:07:17.004 00:22:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:17.004 00:22:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 ************************************ 00:07:17.004 END TEST nvmf_target_discovery 00:07:17.004 ************************************ 00:07:17.004 00:22:42 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:17.004 00:22:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:17.004 00:22:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:17.004 00:22:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 ************************************ 00:07:17.004 START TEST nvmf_referrals 00:07:17.004 ************************************ 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:17.004 * Looking for test storage... 00:07:17.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.004 00:22:42 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:17.004 00:22:42 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.004 00:22:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:19.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:19.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:19.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:19.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:19.537 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
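(Annotation, not part of the captured log.) The nvmf_tcp_init steps being traced here amount to splitting the two e810 ports into an initiator side and a target side that lives in its own network namespace. A minimal standalone sketch of the same topology, assuming the interface names cvl_0_0 / cvl_0_1 and the 10.0.0.0/24 addresses used by this run:

    # target interface goes into its own namespace with 10.0.0.2/24
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator interface stays in the root namespace with 10.0.0.1/24
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before the target is started
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The ping output that follows in the log confirms both sides answer before nvmf_tgt is launched inside the namespace.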
00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:19.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:07:19.538 00:07:19.538 --- 10.0.0.2 ping statistics --- 00:07:19.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.538 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:07:19.538 00:07:19.538 --- 10.0.0.1 ping statistics --- 00:07:19.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.538 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=777363 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 777363 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 777363 ']' 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:19.538 00:22:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:19.538 [2024-05-15 00:22:45.441890] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:19.538 [2024-05-15 00:22:45.442015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.538 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.538 [2024-05-15 00:22:45.523852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.538 [2024-05-15 00:22:45.645063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.538 [2024-05-15 00:22:45.645124] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.538 [2024-05-15 00:22:45.645150] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.538 [2024-05-15 00:22:45.645164] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.538 [2024-05-15 00:22:45.645177] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.538 [2024-05-15 00:22:45.645272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.538 [2024-05-15 00:22:45.645329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.538 [2024-05-15 00:22:45.645382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.538 [2024-05-15 00:22:45.645385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 [2024-05-15 00:22:46.420056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 [2024-05-15 00:22:46.432030] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:20.472 [2024-05-15 00:22:46.432397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:20.472 00:22:46 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:20.472 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:20.731 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:20.988 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:20.988 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:20.988 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:20.988 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:20.988 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:20.988 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:20.988 00:22:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:20.988 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:20.988 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:20.988 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:20.988 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:20.989 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:20.989 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
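(Annotation, not part of the captured log.) Each referral check above follows the same round trip: change the referral list over RPC, then confirm that the discovery log seen by the initiator matches. Condensed into a sketch using the same RPCs and nvme-cli invocation the test traces (scripts/rpc.py stands in for the test's rpc_cmd wrapper; host NQN flags omitted for brevity):

    # add a referral for cnode1 on 127.0.0.2:4430 and read it back over RPC
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # what the initiator sees: discovery log entries, minus the current discovery subsystem
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # drop the referral again and verify the list is empty
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_get_referrals | jq length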
00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:21.246 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:21.247 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:21.247 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:21.247 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:21.247 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:21.247 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:21.247 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:21.504 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:21.505 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:21.505 rmmod nvme_tcp 00:07:21.505 rmmod nvme_fabrics 00:07:21.505 rmmod nvme_keyring 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 777363 ']' 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 777363 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 777363 ']' 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 777363 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 777363 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 777363' 00:07:21.763 killing process with pid 777363 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 777363 00:07:21.763 [2024-05-15 00:22:47.713658] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:21.763 00:22:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 777363 00:07:22.023 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.023 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.023 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.023 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.023 00:22:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
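(Annotation, not part of the captured log.) nvmftestfini then unwinds what nvmftestinit set up. Roughly, assuming the same pid variable and namespace name as this run, and approximating the remove_spdk_ns helper by deleting the namespace directly:

    # stop the target and unload the initiator-side modules
    kill "$nvmfpid" && wait "$nvmfpid"
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # tear down the test topology (approximation of remove_spdk_ns)
    ip netns del cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1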
00:07:22.023 00:22:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.023 00:22:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.023 00:22:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.931 00:22:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:23.931 00:07:23.931 real 0m7.416s 00:07:23.931 user 0m11.018s 00:07:23.931 sys 0m2.336s 00:07:23.931 00:22:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:23.931 00:22:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.931 ************************************ 00:07:23.931 END TEST nvmf_referrals 00:07:23.931 ************************************ 00:07:23.931 00:22:50 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:23.931 00:22:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:23.931 00:22:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:23.931 00:22:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.189 ************************************ 00:07:24.189 START TEST nvmf_connect_disconnect 00:07:24.189 ************************************ 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:24.189 * Looking for test storage... 00:07:24.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.189 00:22:50 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.189 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.190 00:22:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:26.716 
00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:26.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:26.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.716 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:26.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:26.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:07:26.717 00:07:26.717 --- 10.0.0.2 ping statistics --- 00:07:26.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.717 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:07:26.717 00:07:26.717 --- 10.0.0.1 ping statistics --- 00:07:26.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.717 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=779954 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 779954 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 779954 ']' 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:26.717 00:22:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:26.976 [2024-05-15 00:22:52.897759] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
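Editorial note, not part of the captured log: the nvmf_tcp_init sequence traced above builds a small point-to-point topology from the two E810 ports, with cvl_0_0 moved into a private namespace as the target interface (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator (10.0.0.1). A condensed sketch of those same commands, copied from the trace with the long workspace paths shortened:

  # Sketch: namespace topology assembled by nvmf_tcp_init (commands as seen in the trace)
  ip netns add cvl_0_0_ns_spdk                                       # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # opens TCP/4420 on the initiator-side interface
  # nvmf_tgt is then started inside the namespace, as the trace shows:
  #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The pings in both directions above confirm the topology before the target application is launched.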
00:07:26.976 [2024-05-15 00:22:52.897853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.976 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.976 [2024-05-15 00:22:52.980020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.976 [2024-05-15 00:22:53.103551] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.976 [2024-05-15 00:22:53.103613] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.976 [2024-05-15 00:22:53.103629] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.976 [2024-05-15 00:22:53.103643] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.976 [2024-05-15 00:22:53.103654] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.976 [2024-05-15 00:22:53.103772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.976 [2024-05-15 00:22:53.103792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.976 [2024-05-15 00:22:53.103844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.976 [2024-05-15 00:22:53.103847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:27.936 [2024-05-15 00:22:53.864984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:27.936 00:22:53 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:27.936 [2024-05-15 00:22:53.926159] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:27.936 [2024-05-15 00:22:53.926492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:27.936 00:22:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:31.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.328 rmmod nvme_tcp 00:07:41.328 rmmod nvme_fabrics 00:07:41.328 rmmod nvme_keyring 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:41.328 00:23:07 
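Editorial note, not part of the captured log: the connect_disconnect pass is driven entirely by the JSON-RPC calls visible in the trace (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). As an illustrative sketch only, the same target-side setup can be replayed by hand against a running nvmf_tgt; the flags, NQN and serial are copied verbatim from the trace, while the use of scripts/rpc.py as the client (the standard SPDK RPC tool that rpc_cmd wraps) is an assumption here.

  # Sketch: replaying the target-side RPC sequence seen above (assumes nvmf_tgt is
  # already listening on the default /var/tmp/spdk.sock RPC socket)
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                              # flags copied from the trace
  $rpc bdev_malloc_create 64 512                                                 # 64 MB malloc bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then loops nvme connect / nvme disconnect against nqn.2016-06.io.spdk:cnode1, which produces the five "disconnected 1 controller(s)" lines reported above before teardown begins.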
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 779954 ']' 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 779954 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 779954 ']' 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 779954 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 779954 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 779954' 00:07:41.328 killing process with pid 779954 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 779954 00:07:41.328 [2024-05-15 00:23:07.467085] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:41.328 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 779954 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.894 00:23:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.801 00:23:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.801 00:07:43.801 real 0m19.713s 00:07:43.801 user 0m58.350s 00:07:43.801 sys 0m3.531s 00:07:43.801 00:23:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:43.801 00:23:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:43.801 ************************************ 00:07:43.801 END TEST nvmf_connect_disconnect 00:07:43.801 ************************************ 00:07:43.801 00:23:09 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:43.801 00:23:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:43.801 00:23:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:43.801 00:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.801 ************************************ 00:07:43.801 START TEST nvmf_multitarget 
00:07:43.801 ************************************ 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:43.801 * Looking for test storage... 00:07:43.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.801 00:23:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.802 00:23:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:46.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.336 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:46.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:46.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:46.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:07:46.337 00:07:46.337 --- 10.0.0.2 ping statistics --- 00:07:46.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.337 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:07:46.337 00:07:46.337 --- 10.0.0.1 ping statistics --- 00:07:46.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.337 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.337 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=784018 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 784018 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 784018 ']' 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:46.596 00:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:46.596 [2024-05-15 00:23:12.563120] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:46.596 [2024-05-15 00:23:12.563222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.596 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.596 [2024-05-15 00:23:12.646403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.854 [2024-05-15 00:23:12.769817] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.854 [2024-05-15 00:23:12.769877] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.854 [2024-05-15 00:23:12.769893] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.854 [2024-05-15 00:23:12.769907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.854 [2024-05-15 00:23:12.769918] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.854 [2024-05-15 00:23:12.770006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.854 [2024-05-15 00:23:12.770066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.854 [2024-05-15 00:23:12.770121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.854 [2024-05-15 00:23:12.770124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:47.452 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:47.716 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:47.716 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:47.716 "nvmf_tgt_1" 00:07:47.716 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:47.974 "nvmf_tgt_2" 00:07:47.974 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:47.974 00:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:47.974 00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:47.974 
00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:48.232 true 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:48.232 true 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.232 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.232 rmmod nvme_tcp 00:07:48.232 rmmod nvme_fabrics 00:07:48.490 rmmod nvme_keyring 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 784018 ']' 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 784018 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 784018 ']' 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 784018 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 784018 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 784018' 00:07:48.490 killing process with pid 784018 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 784018 00:07:48.490 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 784018 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
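Editorial note, not part of the captured log: the multitarget pass exercises only the nvmf_get_targets, nvmf_create_target and nvmf_delete_target RPCs shown above, using jq to count the targets after each step. A minimal sketch of that flow, assuming the multitarget_rpc.py helper from the SPDK tree and a target already listening on /var/tmp/spdk.sock (paths shortened relative to the trace):

  # Sketch: create/verify/delete flow replayed with the helper script seen in the trace
  rpc_py=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32         # -s 32 copied from the trace
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default target plus the two new ones
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target

The jq length comparisons in the trace ('[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']') are exactly these checks, after which the test tears the target down.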
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.750 00:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.655 00:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.655 00:07:50.655 real 0m6.901s 00:07:50.655 user 0m9.530s 00:07:50.655 sys 0m2.280s 00:07:50.655 00:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:50.655 00:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:50.655 ************************************ 00:07:50.655 END TEST nvmf_multitarget 00:07:50.655 ************************************ 00:07:50.655 00:23:16 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:50.655 00:23:16 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:50.655 00:23:16 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:50.655 00:23:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.913 ************************************ 00:07:50.913 START TEST nvmf_rpc 00:07:50.913 ************************************ 00:07:50.913 00:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:50.913 * Looking for test storage... 00:07:50.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.913 00:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.913 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:50.913 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.914 00:23:16 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.914 
00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.914 00:23:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.445 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.445 
00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:53.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:07:53.445 00:07:53.445 --- 10.0.0.2 ping statistics --- 00:07:53.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.445 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:07:53.445 00:07:53.445 --- 10.0.0.1 ping statistics --- 00:07:53.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.445 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=786659 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 786659 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 786659 ']' 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:53.445 00:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.445 [2024-05-15 00:23:19.584807] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:53.445 [2024-05-15 00:23:19.584885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.703 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.703 [2024-05-15 00:23:19.662810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.703 [2024-05-15 00:23:19.774159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.703 [2024-05-15 00:23:19.774217] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.703 [2024-05-15 00:23:19.774246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.703 [2024-05-15 00:23:19.774257] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.703 [2024-05-15 00:23:19.774266] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.703 [2024-05-15 00:23:19.774317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.703 [2024-05-15 00:23:19.774373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.703 [2024-05-15 00:23:19.774438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.703 [2024-05-15 00:23:19.774441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:54.632 "tick_rate": 2700000000, 00:07:54.632 "poll_groups": [ 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_000", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [] 00:07:54.632 }, 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_001", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [] 00:07:54.632 }, 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_002", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [] 
00:07:54.632 }, 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_003", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [] 00:07:54.632 } 00:07:54.632 ] 00:07:54.632 }' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 [2024-05-15 00:23:20.673251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:54.632 "tick_rate": 2700000000, 00:07:54.632 "poll_groups": [ 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_000", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [ 00:07:54.632 { 00:07:54.632 "trtype": "TCP" 00:07:54.632 } 00:07:54.632 ] 00:07:54.632 }, 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_001", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [ 00:07:54.632 { 00:07:54.632 "trtype": "TCP" 00:07:54.632 } 00:07:54.632 ] 00:07:54.632 }, 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_002", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [ 00:07:54.632 { 00:07:54.632 "trtype": "TCP" 00:07:54.632 } 00:07:54.632 ] 00:07:54.632 }, 00:07:54.632 { 00:07:54.632 "name": "nvmf_tgt_poll_group_003", 00:07:54.632 "admin_qpairs": 0, 00:07:54.632 "io_qpairs": 0, 00:07:54.632 "current_admin_qpairs": 0, 00:07:54.632 "current_io_qpairs": 0, 00:07:54.632 "pending_bdev_io": 0, 00:07:54.632 "completed_nvme_io": 0, 00:07:54.632 "transports": [ 00:07:54.632 { 00:07:54.632 "trtype": "TCP" 00:07:54.632 } 00:07:54.632 ] 00:07:54.632 } 00:07:54.632 ] 
00:07:54.632 }' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.632 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 Malloc1 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 [2024-05-15 00:23:20.826979] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:54.889 [2024-05-15 00:23:20.827291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.889 00:23:20 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:54.889 [2024-05-15 00:23:20.849845] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:54.889 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:54.889 could not add new controller: failed to write to nvme-fabrics device 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.889 00:23:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.454 00:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
00:07:55.454 00:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:07:55.454 00:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.454 00:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:55.454 00:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:07:57.350 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:57.351 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:57.351 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.351 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:57.351 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.351 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:07:57.351 00:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.608 [2024-05-15 00:23:23.619690] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:57.608 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:57.608 could not add new controller: failed to write to nvme-fabrics device 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.608 00:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:58.172 00:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.172 00:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:07:58.172 00:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.172 00:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:58.172 00:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:00.069 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:00.070 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:00.070 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.070 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:00.070 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.070 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:00.070 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.328 [2024-05-15 00:23:26.285093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.328 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.893 00:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.893 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:00.893 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.893 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:00.893 00:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:02.792 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:02.792 
00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:02.792 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.792 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:02.792 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.792 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:02.792 00:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.050 00:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.050 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:03.050 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.051 00:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.051 [2024-05-15 00:23:29.008306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.051 00:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.617 00:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.617 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:03.617 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.617 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:03.617 00:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:05.543 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:05.543 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:05.543 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.543 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:05.543 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.543 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:05.543 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.802 00:23:31 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 [2024-05-15 00:23:31.780255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.802 00:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.368 00:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.368 00:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:06.368 00:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.368 00:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:06.368 00:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # local i=0 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.896 [2024-05-15 00:23:34.557306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.896 00:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:09.154 00:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:08:09.154 00:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:09.154 00:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.154 00:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:09.154 00:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:11.054 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:11.054 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:11.054 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:11.054 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:11.054 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:11.054 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:11.054 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:11.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.312 
[2024-05-15 00:23:37.282253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:11.312 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.878 00:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.878 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:08:11.878 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.878 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:11.878 00:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:08:14.409 00:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:14.409 00:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:14.409 00:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.409 00:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:14.409 00:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.409 00:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:08:14.409 00:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:14.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 [2024-05-15 00:23:40.141533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 
-- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 [2024-05-15 00:23:40.189646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.409 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 [2024-05-15 00:23:40.237804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.410 
00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 [2024-05-15 00:23:40.285993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 [2024-05-15 00:23:40.334151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
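The traces above repeat the subsystem setup/teardown cycle from target/rpc.sh (lines 99-107) five times before the harness collects transport statistics. A minimal sketch of one iteration, assuming the harness's rpc_cmd wrapper (it forwards to scripts/rpc.py against the running target), with the subcommands and values taken directly from the trace:

    # One iteration of the create/listen/add-ns/remove-ns/delete cycle traced above
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1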
00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:14.410 "tick_rate": 2700000000, 00:08:14.410 "poll_groups": [ 00:08:14.410 { 00:08:14.410 "name": "nvmf_tgt_poll_group_000", 00:08:14.410 "admin_qpairs": 2, 00:08:14.410 "io_qpairs": 84, 00:08:14.410 "current_admin_qpairs": 0, 00:08:14.410 "current_io_qpairs": 0, 00:08:14.410 "pending_bdev_io": 0, 00:08:14.410 "completed_nvme_io": 182, 00:08:14.410 "transports": [ 00:08:14.410 { 00:08:14.410 "trtype": "TCP" 00:08:14.410 } 00:08:14.410 ] 00:08:14.410 }, 00:08:14.410 { 00:08:14.410 "name": "nvmf_tgt_poll_group_001", 00:08:14.410 "admin_qpairs": 2, 00:08:14.410 "io_qpairs": 84, 00:08:14.410 "current_admin_qpairs": 0, 00:08:14.410 "current_io_qpairs": 0, 00:08:14.410 "pending_bdev_io": 0, 00:08:14.410 "completed_nvme_io": 136, 00:08:14.410 "transports": [ 00:08:14.410 { 00:08:14.410 "trtype": "TCP" 00:08:14.410 } 00:08:14.410 ] 00:08:14.410 }, 00:08:14.410 { 00:08:14.410 "name": "nvmf_tgt_poll_group_002", 00:08:14.410 "admin_qpairs": 1, 00:08:14.410 "io_qpairs": 84, 00:08:14.410 "current_admin_qpairs": 0, 00:08:14.410 "current_io_qpairs": 0, 00:08:14.410 "pending_bdev_io": 0, 00:08:14.410 "completed_nvme_io": 184, 00:08:14.410 "transports": [ 00:08:14.410 { 00:08:14.410 "trtype": "TCP" 00:08:14.410 } 00:08:14.410 ] 00:08:14.410 }, 00:08:14.410 { 00:08:14.410 "name": "nvmf_tgt_poll_group_003", 00:08:14.410 "admin_qpairs": 2, 00:08:14.410 "io_qpairs": 84, 00:08:14.410 "current_admin_qpairs": 0, 00:08:14.410 "current_io_qpairs": 0, 00:08:14.410 "pending_bdev_io": 0, 00:08:14.410 "completed_nvme_io": 184, 00:08:14.410 "transports": [ 00:08:14.410 { 00:08:14.410 "trtype": "TCP" 00:08:14.410 } 00:08:14.410 ] 00:08:14.410 } 00:08:14.410 ] 00:08:14.410 }' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.410 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.410 rmmod nvme_tcp 00:08:14.410 rmmod nvme_fabrics 00:08:14.410 rmmod nvme_keyring 00:08:14.410 
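The stats block and the two checks above come from the jsum helper in target/rpc.sh: it pulls one counter per poll group out of the nvmf_get_stats JSON with jq, sums the values with awk, and the test only asserts the totals are positive (7 admin qpairs and 336 I/O qpairs across the four poll groups here). A minimal sketch of that aggregation, reusing the $stats variable captured in the trace above:

    # Sum per-poll-group counters from the captured nvmf_get_stats output
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 7
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 336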
00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 786659 ']' 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 786659 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 786659 ']' 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 786659 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 786659 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:14.411 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 786659' 00:08:14.411 killing process with pid 786659 00:08:14.669 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 786659 00:08:14.669 [2024-05-15 00:23:40.572618] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:14.669 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 786659 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.928 00:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.836 00:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.836 00:08:16.836 real 0m26.098s 00:08:16.836 user 1m23.697s 00:08:16.836 sys 0m4.283s 00:08:16.836 00:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:16.836 00:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.836 ************************************ 00:08:16.836 END TEST nvmf_rpc 00:08:16.836 ************************************ 00:08:16.836 00:23:42 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:16.836 00:23:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:16.836 00:23:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:16.836 00:23:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.836 ************************************ 00:08:16.836 START TEST nvmf_invalid 00:08:16.836 ************************************ 00:08:16.836 00:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:17.095 * Looking for test storage... 00:08:17.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.095 00:23:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.627 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.627 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.627 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.627 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:08:19.627 00:08:19.627 --- 10.0.0.2 ping statistics --- 00:08:19.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.627 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:08:19.627 00:08:19.627 --- 10.0.0.1 ping statistics --- 00:08:19.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.627 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:19.627 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=791573 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 791573 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 791573 ']' 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:19.886 00:23:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:19.886 [2024-05-15 00:23:45.833776] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:08:19.886 [2024-05-15 00:23:45.833863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.886 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.886 [2024-05-15 00:23:45.915629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.886 [2024-05-15 00:23:46.042333] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.886 [2024-05-15 00:23:46.042398] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.886 [2024-05-15 00:23:46.042421] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.887 [2024-05-15 00:23:46.042435] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.887 [2024-05-15 00:23:46.042447] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.887 [2024-05-15 00:23:46.042517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.887 [2024-05-15 00:23:46.042551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.887 [2024-05-15 00:23:46.042603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.887 [2024-05-15 00:23:46.042606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:20.145 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4044 00:08:20.403 [2024-05-15 00:23:46.482650] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:20.403 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:20.403 { 00:08:20.403 "nqn": "nqn.2016-06.io.spdk:cnode4044", 00:08:20.403 "tgt_name": "foobar", 00:08:20.403 "method": "nvmf_create_subsystem", 00:08:20.403 "req_id": 1 00:08:20.403 } 00:08:20.403 Got JSON-RPC error response 00:08:20.403 response: 00:08:20.403 { 00:08:20.403 "code": -32603, 00:08:20.403 "message": "Unable to find target foobar" 00:08:20.403 }' 00:08:20.403 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:20.403 { 00:08:20.403 "nqn": "nqn.2016-06.io.spdk:cnode4044", 00:08:20.403 "tgt_name": "foobar", 00:08:20.403 "method": "nvmf_create_subsystem", 00:08:20.403 "req_id": 1 00:08:20.403 } 00:08:20.403 Got JSON-RPC error response 00:08:20.403 response: 00:08:20.403 { 00:08:20.403 "code": -32603, 00:08:20.403 "message": "Unable to find target foobar" 00:08:20.403 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:20.403 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:20.403 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10729 00:08:20.661 [2024-05-15 00:23:46.735559] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10729: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:20.661 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:20.661 { 00:08:20.661 "nqn": "nqn.2016-06.io.spdk:cnode10729", 00:08:20.661 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:20.661 "method": "nvmf_create_subsystem", 00:08:20.661 "req_id": 1 00:08:20.661 } 00:08:20.661 Got JSON-RPC error response 00:08:20.661 response: 00:08:20.661 { 00:08:20.661 "code": -32602, 00:08:20.661 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:20.661 }' 00:08:20.661 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:20.661 { 00:08:20.661 "nqn": "nqn.2016-06.io.spdk:cnode10729", 00:08:20.661 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:20.661 "method": "nvmf_create_subsystem", 00:08:20.661 "req_id": 1 00:08:20.661 } 00:08:20.661 Got JSON-RPC error response 00:08:20.661 response: 00:08:20.661 { 00:08:20.661 "code": -32602, 00:08:20.661 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:20.661 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:20.661 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:20.661 00:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32446 00:08:20.919 [2024-05-15 00:23:46.996421] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32446: invalid model number 'SPDK_Controller' 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:20.919 { 00:08:20.919 "nqn": "nqn.2016-06.io.spdk:cnode32446", 00:08:20.919 "model_number": "SPDK_Controller\u001f", 00:08:20.919 "method": "nvmf_create_subsystem", 00:08:20.919 "req_id": 1 00:08:20.919 } 00:08:20.919 Got JSON-RPC error response 00:08:20.919 response: 00:08:20.919 { 00:08:20.919 "code": -32602, 00:08:20.919 "message": "Invalid MN SPDK_Controller\u001f" 00:08:20.919 }' 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:20.919 { 00:08:20.919 "nqn": "nqn.2016-06.io.spdk:cnode32446", 00:08:20.919 "model_number": "SPDK_Controller\u001f", 00:08:20.919 "method": "nvmf_create_subsystem", 00:08:20.919 "req_id": 1 00:08:20.919 } 00:08:20.919 Got JSON-RPC error response 00:08:20.919 response: 00:08:20.919 { 00:08:20.919 "code": -32602, 00:08:20.919 "message": "Invalid MN SPDK_Controller\u001f" 00:08:20.919 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.919 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 34 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:20.920 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ i == \- ]] 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'i)~^};Dv@)91F%O"P'\''xY' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'i)~^};Dv@)91F%O"P'\''xY' nqn.2016-06.io.spdk:cnode18211 00:08:21.180 [2024-05-15 00:23:47.321561] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18211: invalid serial number 'i)~^};Dv@)91F%O"P'xY' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:21.180 { 00:08:21.180 "nqn": "nqn.2016-06.io.spdk:cnode18211", 00:08:21.180 "serial_number": "i)~^};Dv@)91F%O\"P'\''xY\u007f", 00:08:21.180 "method": "nvmf_create_subsystem", 00:08:21.180 "req_id": 1 00:08:21.180 } 00:08:21.180 Got JSON-RPC error response 00:08:21.180 response: 00:08:21.180 { 00:08:21.180 
"code": -32602, 00:08:21.180 "message": "Invalid SN i)~^};Dv@)91F%O\"P'\''xY\u007f" 00:08:21.180 }' 00:08:21.180 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:21.180 { 00:08:21.180 "nqn": "nqn.2016-06.io.spdk:cnode18211", 00:08:21.180 "serial_number": "i)~^};Dv@)91F%O\"P'xY\u007f", 00:08:21.180 "method": "nvmf_create_subsystem", 00:08:21.180 "req_id": 1 00:08:21.180 } 00:08:21.180 Got JSON-RPC error response 00:08:21.180 response: 00:08:21.180 { 00:08:21.180 "code": -32602, 00:08:21.180 "message": "Invalid SN i)~^};Dv@)91F%O\"P'xY\u007f" 00:08:21.180 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 
00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:21.472 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 
00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 
00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.473 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'J\8Zl`ajr/Jo' 00:08:21.474 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'J\8Zl`ajr/Jo' nqn.2016-06.io.spdk:cnode4088 00:08:21.732 [2024-05-15 00:23:47.718888] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4088: invalid model number 'J\8Zl`ajr/Jo' 00:08:21.732 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:21.732 { 00:08:21.732 "nqn": "nqn.2016-06.io.spdk:cnode4088", 00:08:21.732 "model_number": "J\\8Zl`ajr/Jo", 00:08:21.732 "method": "nvmf_create_subsystem", 00:08:21.732 "req_id": 1 00:08:21.732 } 00:08:21.732 Got JSON-RPC error response 00:08:21.732 response: 00:08:21.732 { 00:08:21.732 "code": -32602, 00:08:21.732 "message": "Invalid MN J\\8Zl`ajr/Jo" 00:08:21.732 }' 00:08:21.732 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:21.732 { 00:08:21.732 "nqn": "nqn.2016-06.io.spdk:cnode4088", 00:08:21.732 "model_number": "J\\8Zl`ajr/Jo", 00:08:21.732 "method": "nvmf_create_subsystem", 00:08:21.732 "req_id": 1 00:08:21.732 } 00:08:21.732 Got JSON-RPC error response 00:08:21.732 response: 00:08:21.732 { 00:08:21.732 "code": -32602, 00:08:21.732 "message": "Invalid MN J\\8Zl`ajr/Jo" 00:08:21.732 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:21.732 00:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:21.990 [2024-05-15 00:23:47.975806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.990 00:23:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:22.248 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:22.248 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:22.248 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:22.248 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:22.248 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:22.506 [2024-05-15 00:23:48.477379] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:22.506 [2024-05-15 00:23:48.477479] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:22.506 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:22.506 { 00:08:22.506 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:22.506 "listen_address": { 00:08:22.506 "trtype": "tcp", 00:08:22.506 "traddr": "", 00:08:22.506 "trsvcid": "4421" 00:08:22.506 }, 00:08:22.506 "method": "nvmf_subsystem_remove_listener", 00:08:22.506 "req_id": 1 00:08:22.506 } 00:08:22.506 Got JSON-RPC error response 00:08:22.506 response: 00:08:22.506 { 00:08:22.506 "code": -32602, 00:08:22.506 "message": "Invalid parameters" 00:08:22.506 }' 00:08:22.506 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:22.506 { 00:08:22.506 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:22.506 "listen_address": { 00:08:22.506 "trtype": "tcp", 00:08:22.506 "traddr": "", 00:08:22.506 "trsvcid": "4421" 00:08:22.506 }, 00:08:22.506 "method": "nvmf_subsystem_remove_listener", 00:08:22.506 "req_id": 1 00:08:22.506 } 00:08:22.506 Got JSON-RPC error response 00:08:22.506 response: 00:08:22.506 { 00:08:22.506 "code": -32602, 00:08:22.506 "message": "Invalid parameters" 00:08:22.506 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:22.506 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16427 -i 0 00:08:22.764 [2024-05-15 00:23:48.710131] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16427: invalid cntlid range [0-65519] 00:08:22.764 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:22.764 { 00:08:22.764 "nqn": "nqn.2016-06.io.spdk:cnode16427", 00:08:22.764 "min_cntlid": 0, 00:08:22.764 "method": "nvmf_create_subsystem", 00:08:22.764 "req_id": 1 00:08:22.764 } 00:08:22.764 Got JSON-RPC error response 00:08:22.764 response: 00:08:22.764 { 00:08:22.764 "code": -32602, 00:08:22.764 "message": "Invalid cntlid range [0-65519]" 00:08:22.764 }' 00:08:22.764 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:22.764 { 00:08:22.764 "nqn": "nqn.2016-06.io.spdk:cnode16427", 00:08:22.764 "min_cntlid": 0, 00:08:22.764 "method": "nvmf_create_subsystem", 00:08:22.764 "req_id": 1 00:08:22.764 } 00:08:22.764 Got JSON-RPC error response 00:08:22.764 response: 00:08:22.765 { 00:08:22.765 "code": -32602, 00:08:22.765 "message": "Invalid cntlid range [0-65519]" 00:08:22.765 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:08:22.765 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18144 -i 65520 00:08:23.022 [2024-05-15 00:23:48.946962] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18144: invalid cntlid range [65520-65519] 00:08:23.022 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:23.022 { 00:08:23.022 "nqn": "nqn.2016-06.io.spdk:cnode18144", 00:08:23.022 "min_cntlid": 65520, 00:08:23.022 "method": "nvmf_create_subsystem", 00:08:23.022 "req_id": 1 00:08:23.022 } 00:08:23.022 Got JSON-RPC error response 00:08:23.022 response: 00:08:23.022 { 00:08:23.022 "code": -32602, 00:08:23.022 "message": "Invalid cntlid range [65520-65519]" 00:08:23.022 }' 00:08:23.022 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:23.022 { 00:08:23.022 "nqn": "nqn.2016-06.io.spdk:cnode18144", 00:08:23.022 "min_cntlid": 65520, 00:08:23.022 "method": "nvmf_create_subsystem", 00:08:23.022 "req_id": 1 00:08:23.022 } 00:08:23.022 Got JSON-RPC error response 00:08:23.022 response: 00:08:23.022 { 00:08:23.022 "code": -32602, 00:08:23.022 "message": "Invalid cntlid range [65520-65519]" 00:08:23.022 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.022 00:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32236 -I 0 00:08:23.280 [2024-05-15 00:23:49.211830] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32236: invalid cntlid range [1-0] 00:08:23.280 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:23.280 { 00:08:23.280 "nqn": "nqn.2016-06.io.spdk:cnode32236", 00:08:23.280 "max_cntlid": 0, 00:08:23.280 "method": "nvmf_create_subsystem", 00:08:23.280 "req_id": 1 00:08:23.280 } 00:08:23.280 Got JSON-RPC error response 00:08:23.280 response: 00:08:23.280 { 00:08:23.280 "code": -32602, 00:08:23.280 "message": "Invalid cntlid range [1-0]" 00:08:23.280 }' 00:08:23.280 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:23.280 { 00:08:23.280 "nqn": "nqn.2016-06.io.spdk:cnode32236", 00:08:23.280 "max_cntlid": 0, 00:08:23.280 "method": "nvmf_create_subsystem", 00:08:23.280 "req_id": 1 00:08:23.280 } 00:08:23.280 Got JSON-RPC error response 00:08:23.280 response: 00:08:23.280 { 00:08:23.280 "code": -32602, 00:08:23.280 "message": "Invalid cntlid range [1-0]" 00:08:23.280 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.280 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12587 -I 65520 00:08:23.538 [2024-05-15 00:23:49.452659] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12587: invalid cntlid range [1-65520] 00:08:23.538 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:23.538 { 00:08:23.538 "nqn": "nqn.2016-06.io.spdk:cnode12587", 00:08:23.538 "max_cntlid": 65520, 00:08:23.538 "method": "nvmf_create_subsystem", 00:08:23.538 "req_id": 1 00:08:23.538 } 00:08:23.538 Got JSON-RPC error response 00:08:23.538 response: 00:08:23.538 { 00:08:23.538 "code": -32602, 00:08:23.538 "message": "Invalid cntlid range [1-65520]" 00:08:23.538 }' 00:08:23.538 00:23:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:08:23.538 { 00:08:23.538 "nqn": "nqn.2016-06.io.spdk:cnode12587", 00:08:23.538 "max_cntlid": 65520, 00:08:23.538 "method": "nvmf_create_subsystem", 00:08:23.538 "req_id": 1 00:08:23.538 } 00:08:23.538 Got JSON-RPC error response 00:08:23.538 response: 00:08:23.538 { 00:08:23.538 "code": -32602, 00:08:23.538 "message": "Invalid cntlid range [1-65520]" 00:08:23.538 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.538 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26043 -i 6 -I 5 00:08:23.538 [2024-05-15 00:23:49.693457] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26043: invalid cntlid range [6-5] 00:08:23.796 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:23.796 { 00:08:23.796 "nqn": "nqn.2016-06.io.spdk:cnode26043", 00:08:23.796 "min_cntlid": 6, 00:08:23.796 "max_cntlid": 5, 00:08:23.796 "method": "nvmf_create_subsystem", 00:08:23.796 "req_id": 1 00:08:23.796 } 00:08:23.796 Got JSON-RPC error response 00:08:23.796 response: 00:08:23.796 { 00:08:23.796 "code": -32602, 00:08:23.796 "message": "Invalid cntlid range [6-5]" 00:08:23.796 }' 00:08:23.796 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:23.796 { 00:08:23.796 "nqn": "nqn.2016-06.io.spdk:cnode26043", 00:08:23.796 "min_cntlid": 6, 00:08:23.796 "max_cntlid": 5, 00:08:23.796 "method": "nvmf_create_subsystem", 00:08:23.796 "req_id": 1 00:08:23.796 } 00:08:23.797 Got JSON-RPC error response 00:08:23.797 response: 00:08:23.797 { 00:08:23.797 "code": -32602, 00:08:23.797 "message": "Invalid cntlid range [6-5]" 00:08:23.797 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:23.797 { 00:08:23.797 "name": "foobar", 00:08:23.797 "method": "nvmf_delete_target", 00:08:23.797 "req_id": 1 00:08:23.797 } 00:08:23.797 Got JSON-RPC error response 00:08:23.797 response: 00:08:23.797 { 00:08:23.797 "code": -32602, 00:08:23.797 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:23.797 }' 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:23.797 { 00:08:23.797 "name": "foobar", 00:08:23.797 "method": "nvmf_delete_target", 00:08:23.797 "req_id": 1 00:08:23.797 } 00:08:23.797 Got JSON-RPC error response 00:08:23.797 response: 00:08:23.797 { 00:08:23.797 "code": -32602, 00:08:23.797 "message": "The specified target doesn't exist, cannot delete it." 
00:08:23.797 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.797 rmmod nvme_tcp 00:08:23.797 rmmod nvme_fabrics 00:08:23.797 rmmod nvme_keyring 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 791573 ']' 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 791573 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 791573 ']' 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 791573 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 791573 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 791573' 00:08:23.797 killing process with pid 791573 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 791573 00:08:23.797 [2024-05-15 00:23:49.919963] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:23.797 00:23:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 791573 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.056 00:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.594 00:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
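A note on the invalid.sh flow traced above: gen_random_s assembles a throwaway serial or model number one character at a time from the chars array of ASCII codes 32-127 (which is what produces the long printf %x / echo -e runs), and each negative test then captures the JSON-RPC error from rpc.py and pattern-matches the message text. Below is a condensed sketch of that pattern, reconstructed from the xtrace; the RANDOM-based index is an assumption (the trace only shows the code points already chosen), the leading-'-' handling is elided, and rpc.py stands for the full scripts/rpc.py path used above:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})    # printable ASCII plus DEL, matching the chars array in the trace
        for (( ll = 0; ll < length; ll++ )); do
            # append one character: pick a code point, print it as hex, expand it with echo -e
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # the script also checks whether the string starts with '-' before echoing (branch not shown in the trace)
        echo "$string"
    }

    # negative-test pattern: the RPC must fail and the error text must name the offending field
    out=$(rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode18211 2>&1) || true
    [[ $out == *"Invalid SN"* ]]

The same shape repeats above for model numbers ("Invalid MN"), cntlid ranges ("Invalid cntlid range [x-y]") and the nonexistent-target deletion check.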
00:08:26.594 00:08:26.594 real 0m9.268s 00:08:26.594 user 0m20.271s 00:08:26.594 sys 0m2.883s 00:08:26.594 00:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:26.594 00:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:26.594 ************************************ 00:08:26.594 END TEST nvmf_invalid 00:08:26.594 ************************************ 00:08:26.594 00:23:52 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:26.594 00:23:52 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:26.594 00:23:52 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:26.594 00:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.594 ************************************ 00:08:26.594 START TEST nvmf_abort 00:08:26.594 ************************************ 00:08:26.594 00:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:26.594 * Looking for test storage... 00:08:26.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.594 00:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.594 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:26.594 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.594 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
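The nvmftestinit call above drives everything that follows: probing the e810 ports, moving the target-side port into its own network namespace, and wiring up the 10.0.0.x addresses before the target application is launched. A condensed sketch of the phy/TCP path, with the interface names, addresses and port taken from the trace below and the helper structure of nvmf/common.sh simplified (the nvmf_tgt path is abbreviated here):

    # move the target-side port into a private namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # the target is then started inside the namespace so its listeners bind to cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

This is why the nvmf_tgt invocation further down is wrapped in ip netns exec, while rpc.py keeps reaching it over the default /var/tmp/spdk.sock UNIX socket from the host namespace, as seen in the earlier RPC calls.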
00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:26.595 00:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.121 00:23:54 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:29.121 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.121 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:29.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:29.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:29.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.122 00:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:29.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:08:29.122 00:08:29.122 --- 10.0.0.2 ping statistics --- 00:08:29.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.122 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:29.122 00:08:29.122 --- 10.0.0.1 ping statistics --- 00:08:29.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.122 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=794507 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 794507 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 794507 ']' 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:29.122 00:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.122 [2024-05-15 00:23:55.081821] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
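Both pings above come back clean, which confirms the plumbing nvmf_tcp_init just built: the first E810 port (cvl_0_0) is moved into a dedicated network namespace as the target-side interface while its sibling (cvl_0_1) stays in the root namespace as the initiator side, so NVMe/TCP traffic really crosses the link even on a single host. A condensed sketch of that sequence, with the interface names and 10.0.0.x addresses taken from this run (they are not fixed by SPDK and will differ on other hosts):

  #!/usr/bin/env bash
  # Sketch of the TCP test-network setup performed by nvmf_tcp_init in nvmf/common.sh.
  # Interface names and addresses are the ones seen in this log; adjust for your host.
  set -e
  tgt_if=cvl_0_0          # port that will live inside the target namespace
  ini_if=cvl_0_1          # port that stays in the root namespace (initiator side)
  ns=cvl_0_0_ns_spdk      # namespace the nvmf target runs in

  ip -4 addr flush "$tgt_if"
  ip -4 addr flush "$ini_if"

  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"

  ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator address
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target address

  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up

  # Let NVMe/TCP traffic (port 4420) in through the initiator-side interface.
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions before starting the target, as the log does.
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1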
00:08:29.122 [2024-05-15 00:23:55.081921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.122 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.122 [2024-05-15 00:23:55.165125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.380 [2024-05-15 00:23:55.287193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.380 [2024-05-15 00:23:55.287249] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.380 [2024-05-15 00:23:55.287265] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.380 [2024-05-15 00:23:55.287278] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.380 [2024-05-15 00:23:55.287290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.380 [2024-05-15 00:23:55.287370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.380 [2024-05-15 00:23:55.287428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.380 [2024-05-15 00:23:55.287425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.944 [2024-05-15 00:23:56.089050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:29.944 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 Malloc0 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 Delay0 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.202 00:23:56 
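With the network in place, nvmfappstart launches nvmf_tgt inside that namespace with all tracepoint groups enabled and core mask 0xE; 0xE is binary 1110, so core 0 is left free and the three reactors come up on cores 1, 2 and 3, exactly as the reactor_run notices above report. The test then blocks in the waitforlisten helper until the app answers on /var/tmp/spdk.sock. A minimal stand-alone equivalent, polling rpc.py in a loop instead of using that helper (an assumption of this sketch, not what the script literally does), might look like:

  #!/usr/bin/env bash
  # Minimal sketch: start nvmf_tgt inside the test namespace and wait for its RPC socket.
  set -e
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ns=cvl_0_0_ns_spdk
  rpc_sock=/var/tmp/spdk.sock

  ip netns exec "$ns" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Poll the RPC socket until the target is ready (or give up after ~100 tries).
  for _ in $(seq 1 100); do
      if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt ready, pid $nvmfpid"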
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 [2024-05-15 00:23:56.158033] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:30.202 [2024-05-15 00:23:56.158365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:30.202 00:23:56 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:30.202 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.202 [2024-05-15 00:23:56.265387] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:32.731 Initializing NVMe Controllers 00:08:32.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:32.731 controller IO queue size 128 less than required 00:08:32.731 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:32.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:32.731 Initialization complete. Launching workers. 
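Those rpc_cmd calls build the whole abort-test target out of RPCs: a TCP transport, a 64 MB malloc bdev wrapped in a delay bdev so that I/Os stay outstanding long enough to be aborted, a subsystem exposing it as NSID 1, and data plus discovery listeners on 10.0.0.2:4420, after which the abort example is pointed at that address. Issued directly against rpc.py, the same sequence (flags copied from this run) is roughly:

  #!/usr/bin/env bash
  # Sketch of the provisioning done by test/nvmf/target/abort.sh, as plain rpc.py calls.
  set -e
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$rootdir/scripts/rpc.py"
  nqn=nqn.2016-06.io.spdk:cnode0

  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport, same flags as the test run
  $rpc bdev_malloc_create 64 4096 -b Malloc0              # 64 MB malloc bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000         # add latency so I/Os can still be aborted
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK0           # allow any host, serial SPDK0
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0                # expose the delay bdev as NSID 1
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Drive the target with the abort example, exactly as the log shows.
  "$rootdir/build/examples/abort" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128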
00:08:32.731 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32702 00:08:32.731 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32763, failed to submit 62 00:08:32.731 success 32706, unsuccess 57, failed 0 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.731 rmmod nvme_tcp 00:08:32.731 rmmod nvme_fabrics 00:08:32.731 rmmod nvme_keyring 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 794507 ']' 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 794507 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 794507 ']' 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 794507 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 794507 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 794507' 00:08:32.731 killing process with pid 794507 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 794507 00:08:32.731 [2024-05-15 00:23:58.402071] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 794507 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.731 00:23:58 
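The abort summary at the top of this block is internally consistent: 123 completed plus 32702 failed reads gives 32825 I/Os, which matches the 32763 aborts submitted plus 62 that could not be submitted, and of the submitted aborts 32706 succeeded plus 57 "unsuccess" (typically meaning the I/O had already completed by the time the abort landed) plus 0 failures again gives 32763. A throwaway check with the numbers copied from this run:

  # Consistency check of the abort counters printed above.
  io_completed=123; io_failed=32702
  aborts_submitted=32763; aborts_not_submitted=62
  abort_success=32706; abort_unsuccess=57; abort_failed=0

  echo $(( io_completed + io_failed ))                        # 32825 I/Os issued in total
  echo $(( aborts_submitted + aborts_not_submitted ))         # 32825 abort attempts, one per I/O
  echo $(( abort_success + abort_unsuccess + abort_failed ))  # 32763, matches aborts actually submitted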
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.731 00:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.633 00:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.633 00:08:34.633 real 0m8.453s 00:08:34.633 user 0m12.737s 00:08:34.633 sys 0m2.958s 00:08:34.633 00:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:34.633 00:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.633 ************************************ 00:08:34.633 END TEST nvmf_abort 00:08:34.633 ************************************ 00:08:34.633 00:24:00 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:34.633 00:24:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:34.633 00:24:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:34.633 00:24:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.892 ************************************ 00:08:34.892 START TEST nvmf_ns_hotplug_stress 00:08:34.892 ************************************ 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:34.892 * Looking for test storage... 00:08:34.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.892 00:24:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.892 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.893 00:24:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.893 00:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.432 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.433 00:24:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:37.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:37.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.433 
00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:37.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.433 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:37.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.434 
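This detection pass is the same one the abort test ran: common.sh keeps per-device-ID buckets (0x1592/0x159b for e810, 0x37d2 for x722, plus the Mellanox IDs), selects the e810 bucket here ([[ e810 == e810 ]] above), and then resolves each PCI address to its kernel netdev through sysfs, which is where the "Found net devices under 0000:0a:00.x" lines come from. A stand-alone sketch of that last step, assuming the 8086:159b device ID seen in this run:

  #!/usr/bin/env bash
  # Sketch: list kernel net interfaces backed by a given PCI vendor:device pair,
  # the way nvmf/common.sh resolves its e810 ports (8086:159b here).
  vendor=0x8086
  device=0x159b

  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == "$vendor" && $(cat "$pci/device") == "$device" ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done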
00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:08:37.434 00:08:37.434 --- 10.0.0.2 ping statistics --- 00:08:37.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.434 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:37.434 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:37.698 00:08:37.698 --- 10.0.0.1 ping statistics --- 00:08:37.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.698 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=797272 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 797272 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 797272 ']' 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:37.698 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.698 [2024-05-15 00:24:03.668382] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:08:37.698 [2024-05-15 00:24:03.668477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.698 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.698 [2024-05-15 00:24:03.759459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.957 [2024-05-15 00:24:03.874129] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:37.957 [2024-05-15 00:24:03.874177] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.957 [2024-05-15 00:24:03.874192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.957 [2024-05-15 00:24:03.874218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.957 [2024-05-15 00:24:03.874228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.957 [2024-05-15 00:24:03.874332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.957 [2024-05-15 00:24:03.874395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.957 [2024-05-15 00:24:03.874398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.957 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:37.957 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:08:37.957 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.957 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:37.957 00:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.957 00:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.957 00:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:37.957 00:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:38.225 [2024-05-15 00:24:04.279674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.225 00:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:38.548 00:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.806 [2024-05-15 00:24:04.826456] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:38.806 [2024-05-15 00:24:04.826686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.806 00:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.064 00:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:39.322 Malloc0 00:08:39.323 00:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:39.580 Delay0 00:08:39.580 00:24:05 
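ns_hotplug_stress.sh provisions its own target the same way, but with material to churn: subsystem cnode1 is capped at 10 namespaces, Delay0 again fronts a small malloc bdev, and (as the next block shows) a 1000 MB null bdev NULL1 is added alongside it before spdk_nvme_perf starts hammering the target. Pulled together as plain rpc.py calls, with flags copied from this run, the setup is roughly:

  #!/usr/bin/env bash
  # Sketch of the ns_hotplug_stress target setup.
  set -e
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # at most 10 namespaces
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0                          # 32 MB, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0
  $rpc bdev_null_create NULL1 1000 512                               # 1000 MB null bdev, resized later
  $rpc nvmf_subsystem_add_ns "$nqn" NULL1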
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.838 00:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:40.096 NULL1 00:08:40.096 00:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:40.354 00:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=797690 00:08:40.354 00:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:40.354 00:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.354 00:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:40.354 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.728 Read completed with error (sct=0, sc=11) 00:08:41.728 00:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.987 00:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:41.987 00:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:42.245 true 00:08:42.245 00:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:42.245 00:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.811 00:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.069 00:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:43.069 00:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:43.326 true 00:08:43.327 00:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:43.327 00:24:09 nvmf_tcp.nvmf_ns_hotplug_stress 
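From here the pattern repeats for the length of the 30-second perf run: while spdk_nvme_perf (pid 797690) keeps issuing 512-byte random reads with up to 1000 error messages suppressed per burst, the test hot-removes NSID 1, re-attaches Delay0, bumps null_size and resizes NULL1; the "Read completed with error (sct=0, sc=11)" bursts are reads that land while the namespace is momentarily detached. A condensed sketch of that loop, assuming $rpc and $nqn as in the setup sketch above and the perf pid passed in as an argument (the while/kill -0 structure is inferred from the repeated trace, not copied from the script):

  #!/usr/bin/env bash
  # Sketch of the hotplug stress loop: detach/re-attach a namespace and grow a null
  # bdev for as long as the separately started perf workload stays alive.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  perf_pid=$1          # pid of the spdk_nvme_perf process driving I/O
  null_size=1000       # NULL1 was created as "bdev_null_create NULL1 1000 512"

  while kill -0 "$perf_pid" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove NSID 1 under load
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0    # attach the delay bdev again
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"    # grow the null bdev each pass
  done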
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.584 00:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.842 00:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:43.842 00:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:44.100 true 00:08:44.100 00:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:44.100 00:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.358 00:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.616 00:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:44.616 00:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:44.874 true 00:08:44.874 00:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:44.874 00:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.808 00:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.065 00:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:46.066 00:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:46.324 true 00:08:46.324 00:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:46.324 00:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.582 00:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.840 00:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:46.840 00:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:47.099 true 00:08:47.099 00:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:47.099 00:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.034 00:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.292 00:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:48.292 00:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:48.550 true 00:08:48.550 00:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:48.550 00:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.807 00:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.065 00:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:49.065 00:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:49.323 true 00:08:49.323 00:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:49.323 00:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.256 00:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.514 00:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:50.514 00:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:50.771 true 00:08:50.771 00:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:50.771 00:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.029 00:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.287 00:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:51.287 00:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:51.544 true 00:08:51.545 00:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:51.545 00:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.823 00:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.113 00:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:52.113 00:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:52.371 true 00:08:52.371 00:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:52.371 00:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.302 00:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.560 00:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:53.560 00:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:53.816 true 00:08:53.816 00:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:53.816 00:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.748 00:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.005 00:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:55.005 00:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:55.263 true 00:08:55.263 00:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:55.263 00:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.521 00:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.780 00:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:55.780 00:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:56.037 true 00:08:56.037 00:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:56.037 00:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.971 00:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.971 00:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:56.971 00:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:57.228 true 00:08:57.228 00:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:57.228 00:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.495 00:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.754 00:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:57.754 00:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:58.012 true 00:08:58.012 00:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:58.012 00:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.945 00:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.203 00:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:59.203 00:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:59.461 true 00:08:59.461 00:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:08:59.461 00:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.720 00:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.977 00:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:59.977 00:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:00.235 true 00:09:00.235 00:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:00.235 00:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.168 00:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.426 00:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:01.426 00:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:01.682 true 00:09:01.683 00:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:01.683 00:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.940 00:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.196 00:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:02.197 00:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:02.454 true 00:09:02.454 00:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:02.454 00:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.423 00:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.692 00:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:03.692 00:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:03.949 true 00:09:03.949 00:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:03.949 00:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.206 00:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.463 00:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:04.463 00:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:04.720 true 00:09:04.720 00:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:04.720 00:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.652 00:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.652 00:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:05.652 00:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:05.909 true 00:09:05.909 00:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:05.909 00:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.166 00:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.424 00:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:06.424 00:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:06.681 true 00:09:06.681 00:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:06.681 00:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.615 00:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.873 00:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:07.873 00:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:08.131 true 00:09:08.131 00:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:08.131 00:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.389 00:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.647 00:24:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:08.647 00:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:08.905 true 00:09:08.905 00:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:08.905 00:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.163 00:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.421 00:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:09.421 00:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:09.679 true 00:09:09.679 00:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690 00:09:09.679 00:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.053 Initializing NVMe Controllers 00:09:11.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:11.053 Controller IO queue size 128, less than required. 00:09:11.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:11.053 Controller IO queue size 128, less than required. 00:09:11.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:11.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:11.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:11.053 Initialization complete. Launching workers. 
00:09:11.053 ========================================================
00:09:11.053 Latency(us)
00:09:11.053 Device Information : IOPS MiB/s Average min max
00:09:11.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 886.33 0.43 75857.81 2586.64 1013006.43
00:09:11.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10995.82 5.37 11640.44 2815.47 367562.50
00:09:11.053 ========================================================
00:09:11.053 Total : 11882.15 5.80 16430.64 2586.64 1013006.43
00:09:11.053
00:09:11.053 00:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:11.053 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:11.053 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:11.311 true
00:09:11.311 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 797690
00:09:11.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (797690) - No such process
00:09:11.311 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 797690
00:09:11.311 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:11.569 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:11.827 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:11.827 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:11.827 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:11.827 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:11.827 00:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:12.086 null0
00:09:12.086 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:12.086 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:12.086 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:12.086 null1
00:09:12.345 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:12.345 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:12.345 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:12.345 null2
00:09:12.345 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:12.345 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads
)) 00:09:12.345 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:12.602 null3 00:09:12.602 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.602 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.602 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:12.861 null4 00:09:12.861 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.861 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.861 00:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:13.119 null5 00:09:13.119 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.119 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.119 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:13.377 null6 00:09:13.377 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.377 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.377 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:13.636 null7 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
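For reference, the driver loop that this part of the trace is stepping through (ns_hotplug_stress.sh lines 58-66) looks roughly like the sketch below. It is reconstructed from the trace rather than copied from the script, and the rpc.py path is shortened to scripts/rpc.py for brevity.

# Sketch reconstructed from the trace: first create one null bdev per worker,
# then launch the workers in parallel and wait for all of them to finish.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    scripts/rpc.py bdev_null_create "null$i" 100 4096   # backing bdev for namespace i+1 (sh@60)
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # hot-add/remove namespace i+1 in the background (sh@63)
    pids+=($!)                         # collect worker PIDs (sh@64)
done
wait "${pids[@]}"                      # sh@66: wait for all eight workers

Null bdevs complete I/O immediately without touching real media, so the point of this phase is to stress the namespace hotplug path on nqn.2016-06.io.spdk:cnode1 rather than the data path.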
00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
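Each background worker runs the add_remove loop whose trace interleaves above. Reconstructed from the trace (ns_hotplug_stress.sh lines 14-18), again with the rpc.py path shortened, it is roughly:

# Sketch of the per-worker function as seen in the trace: ten add/remove cycles
# of one namespace ID against the same subsystem, backed by that worker's null bdev.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
    done
}

With eight such workers running against the same subsystem, the interleaved add_ns/remove_ns calls in the surrounding trace are this loop executing concurrently.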
00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 802262 802264 802266 802268 802270 802272 802274 802276 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.636 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.894 00:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.894 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.894 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.894 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.894 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.894 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.894 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.894 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.152 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.410 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.410 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.410 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.410 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.669 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.927 00:24:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.927 00:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.186 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.444 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.445 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.445 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.445 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.445 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.445 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.445 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.445 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.703 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.961 00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.961 
00:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.220 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.478 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.736 00:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.995 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.254 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.254 
00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.254 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.254 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.254 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.254 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.254 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.254 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.512 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.770 00:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.028 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.029 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.287 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.287 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.287 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.287 
00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.287 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.287 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.287 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.287 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.546 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.804 00:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
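The xtrace entries around this point come from the hotplug loop in ns_hotplug_stress.sh (the @16-@18 lines in the trace): on each pass it attaches the eight null bdevs null0-null7 as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 and then removes all eight again, for up to ten passes, while the initiator side keeps I/O running. The adds complete out of order in the trace, so they are presumably issued in parallel. A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK script (the variable names and the sequential form are assumptions):

    # Reconstruction of the add/remove pattern seen in the trace above (not the original script).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do
        # Attach null0..null7 as namespace IDs 1..8.
        for n in $(seq 0 7); do
            "$rpc_py" nvmf_subsystem_add_ns -n $(( n + 1 )) "$subsys" "null$n"
        done
        # Detach all eight namespaces again before the next pass.
        for nsid in $(seq 1 8); do
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"
        done
        (( ++i ))
    done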
00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.063 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.321 rmmod nvme_tcp 00:09:19.321 rmmod nvme_fabrics 00:09:19.321 rmmod nvme_keyring 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 797272 ']' 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 797272 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 797272 ']' 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 797272 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 797272 00:09:19.321 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:09:19.322 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:09:19.322 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 797272' 00:09:19.322 killing process with pid 797272 00:09:19.322 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 797272 00:09:19.322 [2024-05-15 00:24:45.311603] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:19.322 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 797272 00:09:19.581 00:24:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.581 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.581 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.581 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.581 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.581 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.581 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.581 00:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.514 00:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.514 00:09:21.514 real 0m46.836s 00:09:21.514 user 3m30.080s 00:09:21.514 sys 0m16.810s 00:09:21.514 00:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:21.514 00:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.514 ************************************ 00:09:21.514 END TEST nvmf_ns_hotplug_stress 00:09:21.514 ************************************ 00:09:21.514 00:24:47 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:21.514 00:24:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:21.514 00:24:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:21.514 00:24:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.774 ************************************ 00:09:21.774 START TEST nvmf_connect_stress 00:09:21.774 ************************************ 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:21.774 * Looking for test storage... 
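Just before the connect_stress test starts, the previous test's nvmftestfini runs: the trace shows the nvme-tcp and nvme-fabrics modules being unloaded (which also drops nvme_keyring), the nvmf_tgt process with pid 797272 being killed and waited on, the cvl_0_0_ns_spdk network namespace being removed, and the leftover IPv4 address on cvl_0_1 being flushed. A rough equivalent of that cleanup, using the names from this run (the namespace deletion is an assumption about what _remove_spdk_ns does, not a copy of the helper):

    # Approximate teardown sequence read off the trace above (names taken from this run).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # nvmfpid was 797272 in this run
    ip netns delete cvl_0_0_ns_spdk        # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1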
00:09:21.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.774 00:24:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.306 00:24:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:09:24.306 00:09:24.306 --- 10.0.0.2 ping statistics --- 00:09:24.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.306 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:09:24.306 00:09:24.306 --- 10.0.0.1 ping statistics --- 00:09:24.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.306 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=805323 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 805323 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 805323 ']' 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:24.306 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.307 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:24.307 00:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.307 [2024-05-15 00:24:50.381486] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
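Once both directions of the veth pair answer pings and nvme-tcp is loaded, nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace with core mask 0xE and records its pid (805323 here), and waitforlisten blocks until the RPC socket responds. A minimal sketch of that start-up, with the waitforlisten helper approximated by a simple polling loop (the paths are from this workspace; the loop itself is an assumption, not the SPDK helper):

    # Hedged sketch of starting nvmf_tgt in the target namespace and waiting for its RPC socket.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1       # give up if the target died during start-up
        sleep 0.5
    done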
00:09:24.307 [2024-05-15 00:24:50.381563] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.307 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.307 [2024-05-15 00:24:50.461722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.565 [2024-05-15 00:24:50.580403] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.565 [2024-05-15 00:24:50.580461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.565 [2024-05-15 00:24:50.580474] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.565 [2024-05-15 00:24:50.580484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.565 [2024-05-15 00:24:50.580494] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.565 [2024-05-15 00:24:50.580580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.565 [2024-05-15 00:24:50.580622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.565 [2024-05-15 00:24:50.580624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.499 [2024-05-15 00:24:51.352841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.499 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.500 [2024-05-15 00:24:51.369673] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:25.500 [2024-05-15 00:24:51.384054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.500 NULL1 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=805477 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 
00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:25.500 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.758 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:25.758 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:25.758 00:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.758 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:25.758 00:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.016 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:26.016 00:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:26.016 00:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.016 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.016 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.274 00:24:52 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:26.274 00:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:26.274 00:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.274 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.274 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.840 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:26.840 00:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:26.840 00:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.840 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:26.840 00:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.098 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.098 00:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:27.098 00:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.098 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.098 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.356 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.356 00:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:27.356 00:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.356 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.356 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.613 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.613 00:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:27.613 00:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.613 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.613 00:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.871 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:27.871 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:27.871 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.871 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:27.871 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.437 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.437 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:28.437 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.437 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.437 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.695 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:09:28.695 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:28.695 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.695 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.695 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.953 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.953 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:28.953 00:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.953 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.953 00:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.210 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:29.210 00:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:29.210 00:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.210 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:29.210 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.468 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:29.468 00:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:29.468 00:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.468 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:29.468 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.034 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.034 00:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:30.034 00:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.034 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.034 00:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.292 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.292 00:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:30.292 00:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.292 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.292 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.550 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.550 00:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:30.550 00:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.550 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.550 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.808 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:30.808 00:24:56 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 805477 00:09:30.808 00:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.808 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:30.808 00:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.066 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:31.066 00:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:31.066 00:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.066 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:31.066 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.631 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:31.631 00:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:31.631 00:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.631 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:31.631 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:31.890 00:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:31.890 00:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.890 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:31.890 00:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.148 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.148 00:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:32.148 00:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.148 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.148 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.406 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.406 00:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:32.406 00:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.406 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.406 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.971 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.971 00:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:32.971 00:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:32.971 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.971 00:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.229 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.229 00:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:33.229 
00:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.229 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.229 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.487 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.487 00:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:33.487 00:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.487 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.487 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.745 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:33.745 00:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:33.745 00:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:33.745 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:33.745 00:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.003 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:34.003 00:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:34.003 00:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.003 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:34.003 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.567 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:34.567 00:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:34.567 00:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.567 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:34.567 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:34.824 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:34.824 00:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:34.824 00:25:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:34.824 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:34.824 00:25:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.082 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.082 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:35.082 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:35.082 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.082 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.369 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.369 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:35.369 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:09:35.369 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.369 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.369 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 805477 00:09:35.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (805477) - No such process 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 805477 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.627 rmmod nvme_tcp 00:09:35.627 rmmod nvme_fabrics 00:09:35.627 rmmod nvme_keyring 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 805323 ']' 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 805323 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 805323 ']' 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 805323 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:35.627 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 805323 00:09:35.885 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:09:35.885 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:09:35.885 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 805323' 00:09:35.885 killing process with pid 805323 00:09:35.885 00:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 805323 00:09:35.885 [2024-05-15 00:25:01.805042] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:35.885 00:25:01 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 805323 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.142 00:25:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.045 00:25:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.045 00:09:38.045 real 0m16.421s 00:09:38.045 user 0m40.338s 00:09:38.045 sys 0m6.331s 00:09:38.045 00:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:38.045 00:25:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.045 ************************************ 00:09:38.045 END TEST nvmf_connect_stress 00:09:38.045 ************************************ 00:09:38.045 00:25:04 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:38.045 00:25:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:38.045 00:25:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:38.045 00:25:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.045 ************************************ 00:09:38.045 START TEST nvmf_fused_ordering 00:09:38.045 ************************************ 00:09:38.045 00:25:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:38.306 * Looking for test storage... 
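The long run of repeated '[[ 0 == 0 ]] ... kill -0 805477 ... rpc_cmd' entries above is the stress-wait loop in connect_stress.sh (lines 34-39 per the log): the script keeps issuing RPCs to the target for as long as the stress process 805477 is alive, then reaps it once kill -0 reports 'No such process'. A minimal sketch of that shape, not the script itself ($stress_pid is a placeholder name; it is 805477 in this run):

    while kill -0 "$stress_pid"; do   # connect_stress.sh line 34 in the log
        rpc_cmd                       # line 35: keep hammering the target with RPCs
    done
    wait "$stress_pid"                # line 38
    rm -f rpc.txt                     # line 39 (full path abbreviated)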
00:09:38.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.306 00:25:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:40.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.838 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:40.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:40.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.839 00:25:06 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:40.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:09:40.839 00:09:40.839 --- 10.0.0.2 ping statistics --- 00:09:40.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.839 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:09:40.839 00:09:40.839 --- 10.0.0.1 ping statistics --- 00:09:40.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.839 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=809034 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 809034 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 809034 ']' 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:40.839 00:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:40.839 [2024-05-15 00:25:06.901861] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
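The nvmf_tcp_init block above splits the two ice ports across a network namespace so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 on the host) exchange traffic over a real link, verified by the two pings. Condensed from the commands logged above (the initial ip -4 addr flush of both ports is omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator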
00:09:40.839 [2024-05-15 00:25:06.901972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.839 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.839 [2024-05-15 00:25:06.977784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.098 [2024-05-15 00:25:07.088409] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.098 [2024-05-15 00:25:07.088466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.098 [2024-05-15 00:25:07.088493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.098 [2024-05-15 00:25:07.088504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.098 [2024-05-15 00:25:07.088513] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.098 [2024-05-15 00:25:07.088541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:41.098 [2024-05-15 00:25:07.238206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:41.098 [2024-05-15 00:25:07.254164] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:41.098 [2024-05-15 00:25:07.254460] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.098 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:41.357 NULL1 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.357 00:25:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:41.357 [2024-05-15 00:25:07.298851] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
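Before the fused_ordering workload (whose numbered output follows below) runs, the log shows the target being launched inside the namespace and the subsystem assembled through a short RPC sequence. A condensed sketch of the nvmfappstart and rpc_cmd calls above (rpc_cmd forwards each call to the running target over its RPC socket; full /var/jenkins/... paths abbreviated):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # nvmfpid=809034 in this run
    waitforlisten "$nvmfpid"
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512    # ~1 GB null bdev, 512-byte blocks ("Namespace ID: 1 size: 1GB" below)
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'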
00:09:41.357 [2024-05-15 00:25:07.298890] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809059 ] 00:09:41.357 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.291 Attached to nqn.2016-06.io.spdk:cnode1 00:09:42.291 Namespace ID: 1 size: 1GB 00:09:42.291 fused_ordering(0) 00:09:42.291 fused_ordering(1) 00:09:42.291 fused_ordering(2) 00:09:42.291 fused_ordering(3) 00:09:42.291 fused_ordering(4) 00:09:42.291 fused_ordering(5) 00:09:42.291 fused_ordering(6) 00:09:42.291 fused_ordering(7) 00:09:42.291 fused_ordering(8) 00:09:42.291 fused_ordering(9) 00:09:42.291 fused_ordering(10) 00:09:42.291 fused_ordering(11) 00:09:42.291 fused_ordering(12) 00:09:42.291 fused_ordering(13) 00:09:42.291 fused_ordering(14) 00:09:42.291 fused_ordering(15) 00:09:42.291 fused_ordering(16) 00:09:42.291 fused_ordering(17) 00:09:42.291 fused_ordering(18) 00:09:42.291 fused_ordering(19) 00:09:42.291 fused_ordering(20) 00:09:42.291 fused_ordering(21) 00:09:42.291 fused_ordering(22) 00:09:42.291 fused_ordering(23) 00:09:42.291 fused_ordering(24) 00:09:42.291 fused_ordering(25) 00:09:42.291 fused_ordering(26) 00:09:42.291 fused_ordering(27) 00:09:42.291 fused_ordering(28) 00:09:42.291 fused_ordering(29) 00:09:42.291 fused_ordering(30) 00:09:42.291 fused_ordering(31) 00:09:42.291 fused_ordering(32) 00:09:42.291 fused_ordering(33) 00:09:42.291 fused_ordering(34) 00:09:42.291 fused_ordering(35) 00:09:42.291 fused_ordering(36) 00:09:42.291 fused_ordering(37) 00:09:42.291 fused_ordering(38) 00:09:42.291 fused_ordering(39) 00:09:42.291 fused_ordering(40) 00:09:42.291 fused_ordering(41) 00:09:42.291 fused_ordering(42) 00:09:42.291 fused_ordering(43) 00:09:42.291 fused_ordering(44) 00:09:42.291 fused_ordering(45) 00:09:42.291 fused_ordering(46) 00:09:42.291 fused_ordering(47) 00:09:42.291 fused_ordering(48) 00:09:42.291 fused_ordering(49) 00:09:42.291 fused_ordering(50) 00:09:42.291 fused_ordering(51) 00:09:42.291 fused_ordering(52) 00:09:42.291 fused_ordering(53) 00:09:42.291 fused_ordering(54) 00:09:42.291 fused_ordering(55) 00:09:42.291 fused_ordering(56) 00:09:42.291 fused_ordering(57) 00:09:42.291 fused_ordering(58) 00:09:42.291 fused_ordering(59) 00:09:42.291 fused_ordering(60) 00:09:42.291 fused_ordering(61) 00:09:42.291 fused_ordering(62) 00:09:42.291 fused_ordering(63) 00:09:42.291 fused_ordering(64) 00:09:42.291 fused_ordering(65) 00:09:42.291 fused_ordering(66) 00:09:42.291 fused_ordering(67) 00:09:42.291 fused_ordering(68) 00:09:42.291 fused_ordering(69) 00:09:42.291 fused_ordering(70) 00:09:42.291 fused_ordering(71) 00:09:42.291 fused_ordering(72) 00:09:42.291 fused_ordering(73) 00:09:42.291 fused_ordering(74) 00:09:42.291 fused_ordering(75) 00:09:42.291 fused_ordering(76) 00:09:42.291 fused_ordering(77) 00:09:42.291 fused_ordering(78) 00:09:42.291 fused_ordering(79) 00:09:42.291 fused_ordering(80) 00:09:42.291 fused_ordering(81) 00:09:42.291 fused_ordering(82) 00:09:42.291 fused_ordering(83) 00:09:42.291 fused_ordering(84) 00:09:42.291 fused_ordering(85) 00:09:42.291 fused_ordering(86) 00:09:42.291 fused_ordering(87) 00:09:42.291 fused_ordering(88) 00:09:42.291 fused_ordering(89) 00:09:42.291 fused_ordering(90) 00:09:42.291 fused_ordering(91) 00:09:42.291 fused_ordering(92) 00:09:42.291 fused_ordering(93) 00:09:42.291 fused_ordering(94) 00:09:42.291 fused_ordering(95) 00:09:42.291 fused_ordering(96) 00:09:42.291 
fused_ordering(97) 00:09:42.291 fused_ordering(98) 00:09:42.291 fused_ordering(99) 00:09:42.291 fused_ordering(100) 00:09:42.291 fused_ordering(101) 00:09:42.291 fused_ordering(102) 00:09:42.291 fused_ordering(103) 00:09:42.291 fused_ordering(104) 00:09:42.291 fused_ordering(105) 00:09:42.291 fused_ordering(106) 00:09:42.291 fused_ordering(107) 00:09:42.291 fused_ordering(108) 00:09:42.291 fused_ordering(109) 00:09:42.291 fused_ordering(110) 00:09:42.291 fused_ordering(111) 00:09:42.291 fused_ordering(112) 00:09:42.291 fused_ordering(113) 00:09:42.291 fused_ordering(114) 00:09:42.291 fused_ordering(115) 00:09:42.291 fused_ordering(116) 00:09:42.291 fused_ordering(117) 00:09:42.291 fused_ordering(118) 00:09:42.291 fused_ordering(119) 00:09:42.291 fused_ordering(120) 00:09:42.291 fused_ordering(121) 00:09:42.291 fused_ordering(122) 00:09:42.291 fused_ordering(123) 00:09:42.291 fused_ordering(124) 00:09:42.291 fused_ordering(125) 00:09:42.291 fused_ordering(126) 00:09:42.291 fused_ordering(127) 00:09:42.291 fused_ordering(128) 00:09:42.291 fused_ordering(129) 00:09:42.291 fused_ordering(130) 00:09:42.291 fused_ordering(131) 00:09:42.291 fused_ordering(132) 00:09:42.291 fused_ordering(133) 00:09:42.291 fused_ordering(134) 00:09:42.291 fused_ordering(135) 00:09:42.291 fused_ordering(136) 00:09:42.291 fused_ordering(137) 00:09:42.291 fused_ordering(138) 00:09:42.291 fused_ordering(139) 00:09:42.291 fused_ordering(140) 00:09:42.291 fused_ordering(141) 00:09:42.291 fused_ordering(142) 00:09:42.291 fused_ordering(143) 00:09:42.291 fused_ordering(144) 00:09:42.291 fused_ordering(145) 00:09:42.291 fused_ordering(146) 00:09:42.291 fused_ordering(147) 00:09:42.291 fused_ordering(148) 00:09:42.291 fused_ordering(149) 00:09:42.291 fused_ordering(150) 00:09:42.291 fused_ordering(151) 00:09:42.291 fused_ordering(152) 00:09:42.291 fused_ordering(153) 00:09:42.291 fused_ordering(154) 00:09:42.291 fused_ordering(155) 00:09:42.291 fused_ordering(156) 00:09:42.291 fused_ordering(157) 00:09:42.291 fused_ordering(158) 00:09:42.291 fused_ordering(159) 00:09:42.291 fused_ordering(160) 00:09:42.291 fused_ordering(161) 00:09:42.291 fused_ordering(162) 00:09:42.291 fused_ordering(163) 00:09:42.291 fused_ordering(164) 00:09:42.291 fused_ordering(165) 00:09:42.291 fused_ordering(166) 00:09:42.292 fused_ordering(167) 00:09:42.292 fused_ordering(168) 00:09:42.292 fused_ordering(169) 00:09:42.292 fused_ordering(170) 00:09:42.292 fused_ordering(171) 00:09:42.292 fused_ordering(172) 00:09:42.292 fused_ordering(173) 00:09:42.292 fused_ordering(174) 00:09:42.292 fused_ordering(175) 00:09:42.292 fused_ordering(176) 00:09:42.292 fused_ordering(177) 00:09:42.292 fused_ordering(178) 00:09:42.292 fused_ordering(179) 00:09:42.292 fused_ordering(180) 00:09:42.292 fused_ordering(181) 00:09:42.292 fused_ordering(182) 00:09:42.292 fused_ordering(183) 00:09:42.292 fused_ordering(184) 00:09:42.292 fused_ordering(185) 00:09:42.292 fused_ordering(186) 00:09:42.292 fused_ordering(187) 00:09:42.292 fused_ordering(188) 00:09:42.292 fused_ordering(189) 00:09:42.292 fused_ordering(190) 00:09:42.292 fused_ordering(191) 00:09:42.292 fused_ordering(192) 00:09:42.292 fused_ordering(193) 00:09:42.292 fused_ordering(194) 00:09:42.292 fused_ordering(195) 00:09:42.292 fused_ordering(196) 00:09:42.292 fused_ordering(197) 00:09:42.292 fused_ordering(198) 00:09:42.292 fused_ordering(199) 00:09:42.292 fused_ordering(200) 00:09:42.292 fused_ordering(201) 00:09:42.292 fused_ordering(202) 00:09:42.292 fused_ordering(203) 00:09:42.292 fused_ordering(204) 
00:09:42.292 fused_ordering(205) 00:09:42.857 fused_ordering(206) 00:09:42.857 fused_ordering(207) 00:09:42.857 fused_ordering(208) 00:09:42.857 fused_ordering(209) 00:09:42.857 fused_ordering(210) 00:09:42.857 fused_ordering(211) 00:09:42.857 fused_ordering(212) 00:09:42.857 fused_ordering(213) 00:09:42.857 fused_ordering(214) 00:09:42.857 fused_ordering(215) 00:09:42.857 fused_ordering(216) 00:09:42.857 fused_ordering(217) 00:09:42.857 fused_ordering(218) 00:09:42.857 fused_ordering(219) 00:09:42.857 fused_ordering(220) 00:09:42.857 fused_ordering(221) 00:09:42.857 fused_ordering(222) 00:09:42.857 fused_ordering(223) 00:09:42.857 fused_ordering(224) 00:09:42.857 fused_ordering(225) 00:09:42.857 fused_ordering(226) 00:09:42.857 fused_ordering(227) 00:09:42.857 fused_ordering(228) 00:09:42.857 fused_ordering(229) 00:09:42.857 fused_ordering(230) 00:09:42.857 fused_ordering(231) 00:09:42.857 fused_ordering(232) 00:09:42.857 fused_ordering(233) 00:09:42.858 fused_ordering(234) 00:09:42.858 fused_ordering(235) 00:09:42.858 fused_ordering(236) 00:09:42.858 fused_ordering(237) 00:09:42.858 fused_ordering(238) 00:09:42.858 fused_ordering(239) 00:09:42.858 fused_ordering(240) 00:09:42.858 fused_ordering(241) 00:09:42.858 fused_ordering(242) 00:09:42.858 fused_ordering(243) 00:09:42.858 fused_ordering(244) 00:09:42.858 fused_ordering(245) 00:09:42.858 fused_ordering(246) 00:09:42.858 fused_ordering(247) 00:09:42.858 fused_ordering(248) 00:09:42.858 fused_ordering(249) 00:09:42.858 fused_ordering(250) 00:09:42.858 fused_ordering(251) 00:09:42.858 fused_ordering(252) 00:09:42.858 fused_ordering(253) 00:09:42.858 fused_ordering(254) 00:09:42.858 fused_ordering(255) 00:09:42.858 fused_ordering(256) 00:09:42.858 fused_ordering(257) 00:09:42.858 fused_ordering(258) 00:09:42.858 fused_ordering(259) 00:09:42.858 fused_ordering(260) 00:09:42.858 fused_ordering(261) 00:09:42.858 fused_ordering(262) 00:09:42.858 fused_ordering(263) 00:09:42.858 fused_ordering(264) 00:09:42.858 fused_ordering(265) 00:09:42.858 fused_ordering(266) 00:09:42.858 fused_ordering(267) 00:09:42.858 fused_ordering(268) 00:09:42.858 fused_ordering(269) 00:09:42.858 fused_ordering(270) 00:09:42.858 fused_ordering(271) 00:09:42.858 fused_ordering(272) 00:09:42.858 fused_ordering(273) 00:09:42.858 fused_ordering(274) 00:09:42.858 fused_ordering(275) 00:09:42.858 fused_ordering(276) 00:09:42.858 fused_ordering(277) 00:09:42.858 fused_ordering(278) 00:09:42.858 fused_ordering(279) 00:09:42.858 fused_ordering(280) 00:09:42.858 fused_ordering(281) 00:09:42.858 fused_ordering(282) 00:09:42.858 fused_ordering(283) 00:09:42.858 fused_ordering(284) 00:09:42.858 fused_ordering(285) 00:09:42.858 fused_ordering(286) 00:09:42.858 fused_ordering(287) 00:09:42.858 fused_ordering(288) 00:09:42.858 fused_ordering(289) 00:09:42.858 fused_ordering(290) 00:09:42.858 fused_ordering(291) 00:09:42.858 fused_ordering(292) 00:09:42.858 fused_ordering(293) 00:09:42.858 fused_ordering(294) 00:09:42.858 fused_ordering(295) 00:09:42.858 fused_ordering(296) 00:09:42.858 fused_ordering(297) 00:09:42.858 fused_ordering(298) 00:09:42.858 fused_ordering(299) 00:09:42.858 fused_ordering(300) 00:09:42.858 fused_ordering(301) 00:09:42.858 fused_ordering(302) 00:09:42.858 fused_ordering(303) 00:09:42.858 fused_ordering(304) 00:09:42.858 fused_ordering(305) 00:09:42.858 fused_ordering(306) 00:09:42.858 fused_ordering(307) 00:09:42.858 fused_ordering(308) 00:09:42.858 fused_ordering(309) 00:09:42.858 fused_ordering(310) 00:09:42.858 fused_ordering(311) 00:09:42.858 
fused_ordering(312) 00:09:42.858 fused_ordering(313) 00:09:42.858 fused_ordering(314) 00:09:42.858 fused_ordering(315) 00:09:42.858 fused_ordering(316) 00:09:42.858 fused_ordering(317) 00:09:42.858 fused_ordering(318) 00:09:42.858 fused_ordering(319) 00:09:42.858 fused_ordering(320) 00:09:42.858 fused_ordering(321) 00:09:42.858 fused_ordering(322) 00:09:42.858 fused_ordering(323) 00:09:42.858 fused_ordering(324) 00:09:42.858 fused_ordering(325) 00:09:42.858 fused_ordering(326) 00:09:42.858 fused_ordering(327) 00:09:42.858 fused_ordering(328) 00:09:42.858 fused_ordering(329) 00:09:42.858 fused_ordering(330) 00:09:42.858 fused_ordering(331) 00:09:42.858 fused_ordering(332) 00:09:42.858 fused_ordering(333) 00:09:42.858 fused_ordering(334) 00:09:42.858 fused_ordering(335) 00:09:42.858 fused_ordering(336) 00:09:42.858 fused_ordering(337) 00:09:42.858 fused_ordering(338) 00:09:42.858 fused_ordering(339) 00:09:42.858 fused_ordering(340) 00:09:42.858 fused_ordering(341) 00:09:42.858 fused_ordering(342) 00:09:42.858 fused_ordering(343) 00:09:42.858 fused_ordering(344) 00:09:42.858 fused_ordering(345) 00:09:42.858 fused_ordering(346) 00:09:42.858 fused_ordering(347) 00:09:42.858 fused_ordering(348) 00:09:42.858 fused_ordering(349) 00:09:42.858 fused_ordering(350) 00:09:42.858 fused_ordering(351) 00:09:42.858 fused_ordering(352) 00:09:42.858 fused_ordering(353) 00:09:42.858 fused_ordering(354) 00:09:42.858 fused_ordering(355) 00:09:42.858 fused_ordering(356) 00:09:42.858 fused_ordering(357) 00:09:42.858 fused_ordering(358) 00:09:42.858 fused_ordering(359) 00:09:42.858 fused_ordering(360) 00:09:42.858 fused_ordering(361) 00:09:42.858 fused_ordering(362) 00:09:42.858 fused_ordering(363) 00:09:42.858 fused_ordering(364) 00:09:42.858 fused_ordering(365) 00:09:42.858 fused_ordering(366) 00:09:42.858 fused_ordering(367) 00:09:42.858 fused_ordering(368) 00:09:42.858 fused_ordering(369) 00:09:42.858 fused_ordering(370) 00:09:42.858 fused_ordering(371) 00:09:42.858 fused_ordering(372) 00:09:42.858 fused_ordering(373) 00:09:42.858 fused_ordering(374) 00:09:42.858 fused_ordering(375) 00:09:42.858 fused_ordering(376) 00:09:42.858 fused_ordering(377) 00:09:42.858 fused_ordering(378) 00:09:42.858 fused_ordering(379) 00:09:42.858 fused_ordering(380) 00:09:42.858 fused_ordering(381) 00:09:42.858 fused_ordering(382) 00:09:42.858 fused_ordering(383) 00:09:42.858 fused_ordering(384) 00:09:42.858 fused_ordering(385) 00:09:42.858 fused_ordering(386) 00:09:42.858 fused_ordering(387) 00:09:42.858 fused_ordering(388) 00:09:42.858 fused_ordering(389) 00:09:42.858 fused_ordering(390) 00:09:42.858 fused_ordering(391) 00:09:42.858 fused_ordering(392) 00:09:42.858 fused_ordering(393) 00:09:42.858 fused_ordering(394) 00:09:42.858 fused_ordering(395) 00:09:42.858 fused_ordering(396) 00:09:42.858 fused_ordering(397) 00:09:42.858 fused_ordering(398) 00:09:42.858 fused_ordering(399) 00:09:42.858 fused_ordering(400) 00:09:42.858 fused_ordering(401) 00:09:42.858 fused_ordering(402) 00:09:42.858 fused_ordering(403) 00:09:42.858 fused_ordering(404) 00:09:42.858 fused_ordering(405) 00:09:42.858 fused_ordering(406) 00:09:42.858 fused_ordering(407) 00:09:42.858 fused_ordering(408) 00:09:42.858 fused_ordering(409) 00:09:42.858 fused_ordering(410) 00:09:43.425 fused_ordering(411) 00:09:43.425 fused_ordering(412) 00:09:43.425 fused_ordering(413) 00:09:43.425 fused_ordering(414) 00:09:43.425 fused_ordering(415) 00:09:43.425 fused_ordering(416) 00:09:43.425 fused_ordering(417) 00:09:43.425 fused_ordering(418) 00:09:43.425 fused_ordering(419) 
00:09:43.425 fused_ordering(420) ... 00:09:45.295 fused_ordering(956) 00:09:45.295
fused_ordering(957) 00:09:45.295 fused_ordering(958) 00:09:45.295 fused_ordering(959) 00:09:45.295 fused_ordering(960) 00:09:45.295 fused_ordering(961) 00:09:45.295 fused_ordering(962) 00:09:45.295 fused_ordering(963) 00:09:45.295 fused_ordering(964) 00:09:45.295 fused_ordering(965) 00:09:45.295 fused_ordering(966) 00:09:45.295 fused_ordering(967) 00:09:45.295 fused_ordering(968) 00:09:45.295 fused_ordering(969) 00:09:45.295 fused_ordering(970) 00:09:45.295 fused_ordering(971) 00:09:45.295 fused_ordering(972) 00:09:45.295 fused_ordering(973) 00:09:45.295 fused_ordering(974) 00:09:45.295 fused_ordering(975) 00:09:45.295 fused_ordering(976) 00:09:45.295 fused_ordering(977) 00:09:45.295 fused_ordering(978) 00:09:45.295 fused_ordering(979) 00:09:45.295 fused_ordering(980) 00:09:45.295 fused_ordering(981) 00:09:45.295 fused_ordering(982) 00:09:45.295 fused_ordering(983) 00:09:45.295 fused_ordering(984) 00:09:45.295 fused_ordering(985) 00:09:45.295 fused_ordering(986) 00:09:45.295 fused_ordering(987) 00:09:45.295 fused_ordering(988) 00:09:45.295 fused_ordering(989) 00:09:45.295 fused_ordering(990) 00:09:45.295 fused_ordering(991) 00:09:45.295 fused_ordering(992) 00:09:45.295 fused_ordering(993) 00:09:45.295 fused_ordering(994) 00:09:45.295 fused_ordering(995) 00:09:45.295 fused_ordering(996) 00:09:45.295 fused_ordering(997) 00:09:45.295 fused_ordering(998) 00:09:45.295 fused_ordering(999) 00:09:45.295 fused_ordering(1000) 00:09:45.295 fused_ordering(1001) 00:09:45.295 fused_ordering(1002) 00:09:45.295 fused_ordering(1003) 00:09:45.295 fused_ordering(1004) 00:09:45.295 fused_ordering(1005) 00:09:45.295 fused_ordering(1006) 00:09:45.295 fused_ordering(1007) 00:09:45.295 fused_ordering(1008) 00:09:45.295 fused_ordering(1009) 00:09:45.295 fused_ordering(1010) 00:09:45.295 fused_ordering(1011) 00:09:45.295 fused_ordering(1012) 00:09:45.295 fused_ordering(1013) 00:09:45.295 fused_ordering(1014) 00:09:45.295 fused_ordering(1015) 00:09:45.295 fused_ordering(1016) 00:09:45.295 fused_ordering(1017) 00:09:45.295 fused_ordering(1018) 00:09:45.295 fused_ordering(1019) 00:09:45.296 fused_ordering(1020) 00:09:45.296 fused_ordering(1021) 00:09:45.296 fused_ordering(1022) 00:09:45.296 fused_ordering(1023) 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.296 rmmod nvme_tcp 00:09:45.296 rmmod nvme_fabrics 00:09:45.296 rmmod nvme_keyring 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 809034 ']' 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 809034 
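nvmftestfini above unloads the NVMe/TCP kernel modules and then calls killprocess against the target PID (809034); the trace that continues below shows killprocess verifying that PID before it sends the signal. A minimal sketch of that teardown, assuming the target PID is held in $nvmfpid (it is not a definitive copy of the harness code):

  # Retry the module unload; it can fail while connections are still draining.
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e

  # Kill the nvmf_tgt process if it is still alive, then wait for it to exit.
  if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
      echo "killing process with pid $nvmfpid"
      kill "$nvmfpid"
      wait "$nvmfpid" 2>/dev/null
  fi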
00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 809034 ']' 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 809034 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 809034 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 809034' 00:09:45.296 killing process with pid 809034 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 809034 00:09:45.296 [2024-05-15 00:25:11.199258] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:45.296 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 809034 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.555 00:25:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.458 00:25:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.458 00:09:47.458 real 0m9.324s 00:09:47.458 user 0m6.797s 00:09:47.458 sys 0m4.615s 00:09:47.458 00:25:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:47.458 00:25:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:47.458 ************************************ 00:09:47.458 END TEST nvmf_fused_ordering 00:09:47.458 ************************************ 00:09:47.458 00:25:13 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:47.458 00:25:13 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:47.458 00:25:13 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:47.458 00:25:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.458 ************************************ 00:09:47.458 START TEST nvmf_delete_subsystem 00:09:47.458 ************************************ 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:47.458 * 
Looking for test storage... 00:09:47.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.458 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.716 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- 
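nvmf/common.sh (sourced above) also prepares the initiator-side identity: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is the UUID embedded in it, and NVME_HOST/NVME_CONNECT are the pieces a test can combine into an nvme connect call. This particular test drives I/O with spdk_nvme_perf instead, so the connect below is a hypothetical illustration of how those variables fit together against the 10.0.0.2:4420 listener created later:

  # Host identity as nvmf/common.sh derives it: an autogenerated hostnqn plus its UUID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # e.g. 5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  # Hypothetical initiator-side connect (not issued in this trace).
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"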
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.717 00:25:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:50.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:50.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:50.251 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- 
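The device scan above keeps only the supported E810 ports (PCI ID 8086:159b, bound to the ice driver) and then resolves each PCI address to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are reported just below. A minimal standalone sketch of that lookup for the two addresses found in this run:

  # Map each NVMe-oF-capable NIC (by PCI address) to its kernel net device via sysfs.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] || continue          # skip if the device exposes no netdev
          echo "Found net device under $pci: $(basename "$netdir")"
      done
  done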
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:50.251 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.251 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:09:50.252 00:09:50.252 --- 10.0.0.2 ping statistics --- 00:09:50.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.252 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:09:50.252 00:09:50.252 --- 10.0.0.1 ping statistics --- 00:09:50.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.252 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=811811 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 811811 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 811811 ']' 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
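nvmf_tcp_init above builds the usual phy-mode topology: the target-side port cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened, and both directions are verified with ping. Reassembled from the trace, the sequence is essentially:

  # Target NIC lives in its own namespace; initiator NIC stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic to the default port and sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1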
00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:50.252 00:25:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:50.252 [2024-05-15 00:25:16.249121] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:09:50.252 [2024-05-15 00:25:16.249217] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.252 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.252 [2024-05-15 00:25:16.329766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.511 [2024-05-15 00:25:16.447175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.511 [2024-05-15 00:25:16.447241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.511 [2024-05-15 00:25:16.447257] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.511 [2024-05-15 00:25:16.447271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.511 [2024-05-15 00:25:16.447282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.511 [2024-05-15 00:25:16.447372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.511 [2024-05-15 00:25:16.447378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.077 [2024-05-15 00:25:17.208222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.077 00:25:17 
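waitforlisten (called above with nvmfpid=811811) blocks until the freshly launched nvmf_tgt answers on its RPC socket, so the nvmf_create_transport and nvmf_create_subsystem calls in the same stretch of trace only run once the target is ready. A rough equivalent, assuming $SPDK_DIR points at the spdk checkout (here /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk) and the default /var/tmp/spdk.sock socket:

  # Poll the RPC socket until the target responds, or give up after ~100 tries.
  rpc_py="$SPDK_DIR/scripts/rpc.py"
  for i in $(seq 1 100); do
      if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.5
  done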
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.077 [2024-05-15 00:25:17.224239] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:51.077 [2024-05-15 00:25:17.224518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.077 NULL1 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.077 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.335 Delay0 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=811966 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:51.335 00:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:51.335 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.335 [2024-05-15 00:25:17.299232] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
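The rpc_cmd calls traced above assemble the whole target: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (up to 10 namespaces), a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O is deliberately slow and still queued when the subsystem is later deleted. The same sequence as direct rpc.py calls (rpc_cmd in the trace is effectively a wrapper around that script; $SPDK_DIR is a placeholder for the spdk checkout):

  rpc_py="$SPDK_DIR/scripts/rpc.py"

  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # A 1000 MiB null bdev with 512-byte blocks, fronted by a delay bdev that adds
  # roughly 1 second (1,000,000 us) of latency to reads and writes.
  $rpc_py bdev_null_create NULL1 1000 512
  $rpc_py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0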
00:09:53.235 00:25:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.235 00:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:53.235 00:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 [2024-05-15 00:25:19.391230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdcd4000c00 is same with the state(5) to be set 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 starting I/O failed: -6 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 Read completed with error (sct=0, sc=8) 00:09:53.235 Write completed with error (sct=0, sc=8) 00:09:53.235 
Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ... starting I/O failed: -6 ...
00:09:53.236 [2024-05-15 00:25:19.392250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2015a60 is same with the state(5) to be set
00:09:53.236 Read completed with error (sct=0, sc=8) ...
00:09:54.202 [2024-05-15 00:25:20.357523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20347f0 is same with the state(5) to be set
00:09:54.460 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:09:54.460 [2024-05-15 00:25:20.393310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdcd400c2f0 is same with the state(5) to be set
00:09:54.460 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:09:54.461 [2024-05-15 00:25:20.394344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2014e10 is same with the state(5) to be set
00:09:54.461 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:09:54.461 [2024-05-15 00:25:20.394614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2015880 is same with the state(5) to be set
00:09:54.461 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:09:54.461 [2024-05-15 00:25:20.394895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203b790 is same with the state(5) to be set
00:09:54.461 Initializing NVMe Controllers
00:09:54.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:54.461 Controller IO queue size 128, less than required.
00:09:54.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:54.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:54.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:54.461 Initialization complete. Launching workers.
00:09:54.461 ======================================================== 00:09:54.461 Latency(us) 00:09:54.461 Device Information : IOPS MiB/s Average min max 00:09:54.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.41 0.10 944429.03 1224.62 1013419.12 00:09:54.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.26 0.07 889429.13 602.68 1013966.88 00:09:54.461 ======================================================== 00:09:54.461 Total : 347.68 0.17 920342.05 602.68 1013966.88 00:09:54.461 00:09:54.461 [2024-05-15 00:25:20.395874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20347f0 (9): Bad file descriptor 00:09:54.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:54.461 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:54.461 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:54.461 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 811966 00:09:54.461 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 811966 00:09:55.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (811966) - No such process 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 811966 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 811966 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 811966 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
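What the delete_subsystem.sh trace above amounts to: spdk_nvme_perf is left running against the namespace, the subsystem it talks to is torn down underneath it (that is the point of this test), and the script then polls the perf PID until the process is gone before using the NOT/wait helpers to assert that perf exited non-zero. A minimal sketch of that polling pattern, with illustrative variable names rather than the exact helpers from autotest_common.sh:

    # Sketch of the poll-until-exit pattern seen in delete_subsystem.sh above.
    # perf_pid is assumed to hold the PID of a backgrounded spdk_nvme_perf run.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 only checks that the PID still exists
        sleep 0.5
        (( delay++ > 30 )) && exit 1            # give up rather than poll forever
    done
    # wait reaps the background job and returns its exit status; the test expects
    # a non-zero status here because perf's outstanding I/O was disrupted.
    if wait "$perf_pid"; then
        exit 1
    fi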
00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.026 [2024-05-15 00:25:20.915558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=812369 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:55.026 00:25:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:55.026 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.026 [2024-05-15 00:25:20.971177] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
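For the second round the subsystem, listener and Delay0 namespace are re-created and a fresh perf job (PID 812369 above) is launched against them. The flags on that spdk_nvme_perf invocation decode roughly as in the sketch below; the comments reflect my reading of the tool's usage text, not anything stated in this log, so verify against spdk_nvme_perf --help before relying on them:

    # Flag meanings (to be double-checked with spdk_nvme_perf --help):
    #   -c 0xC    core mask: cores 2 and 3, matching the "lcore 2"/"lcore 3" lines in the output
    #   -r '...'  transport ID of the listener to connect to (TCP, 10.0.0.2:4420)
    #   -t 3      run time in seconds
    #   -q 128    queue depth
    #   -w randrw -M 70   random mixed workload, roughly 70% reads
    #   -o 512    I/O size in bytes
    #   -P 4      number of I/O qpairs per namespace
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!    # remember the PID so the kill -0 poll loop above can watch for it to exit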
00:09:55.284 00:25:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:55.284 00:25:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:55.284 00:25:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:55.849 00:25:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:55.849 00:25:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:55.849 00:25:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:56.415 00:25:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.415 00:25:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:56.415 00:25:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:56.982 00:25:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.982 00:25:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:56.982 00:25:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.550 00:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:57.550 00:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:57.550 00:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.808 00:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:57.808 00:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:57.808 00:25:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:58.374 Initializing NVMe Controllers 00:09:58.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.374 Controller IO queue size 128, less than required. 00:09:58.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:58.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:58.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:58.374 Initialization complete. Launching workers. 
00:09:58.374 ======================================================== 00:09:58.374 Latency(us) 00:09:58.374 Device Information : IOPS MiB/s Average min max 00:09:58.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004416.77 1000243.90 1042197.34 00:09:58.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004573.07 1000282.71 1041179.87 00:09:58.374 ======================================================== 00:09:58.374 Total : 256.00 0.12 1004494.92 1000243.90 1042197.34 00:09:58.374 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 812369 00:09:58.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (812369) - No such process 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 812369 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.374 rmmod nvme_tcp 00:09:58.374 rmmod nvme_fabrics 00:09:58.374 rmmod nvme_keyring 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 811811 ']' 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 811811 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 811811 ']' 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 811811 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:58.374 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 811811 00:09:58.633 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:09:58.633 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:09:58.633 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 811811' 00:09:58.633 killing process with pid 811811 00:09:58.633 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 811811 00:09:58.633 [2024-05-15 00:25:24.540885] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:58.633 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 811811 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.893 00:25:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.800 00:25:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.800 00:10:00.800 real 0m13.284s 00:10:00.800 user 0m29.382s 00:10:00.800 sys 0m3.227s 00:10:00.800 00:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:00.800 00:25:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.800 ************************************ 00:10:00.800 END TEST nvmf_delete_subsystem 00:10:00.800 ************************************ 00:10:00.800 00:25:26 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:00.800 00:25:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:00.800 00:25:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:00.800 00:25:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.800 ************************************ 00:10:00.800 START TEST nvmf_ns_masking 00:10:00.800 ************************************ 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:00.800 * Looking for test storage... 
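The END TEST / START TEST banners and the real/user/sys summary above come from the run_test wrapper in autotest_common.sh, through which the suites in this job are funnelled. Stripped of its xtrace and timing bookkeeping, the wrapper behaves roughly like the sketch below (run_test_sketch is an illustrative name, not the real helper):

    # Rough shape of the run_test pattern visible in this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"; local rc=$?      # the real/user/sys lines above come from timing like this
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    # e.g. run_test_sketch nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp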
00:10:00.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=eef71715-7d8b-4262-a38f-eb58b0e2c006 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.800 00:25:26 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.800 00:25:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.058 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:01.058 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:01.058 00:25:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:01.058 00:25:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:03.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.591 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:03.592 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:03.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
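The gather_supported_nvmf_pci_devs trace above keeps per-family lists of PCI IDs (e810, x722, mlx), matches the host's NICs against them, and resolves each matching PCI function to its kernel net device through /sys/bus/pci/devices/<bdf>/net/, which is how 0000:0a:00.0 and 0000:0a:00.1 end up as cvl_0_0 and cvl_0_1. A stripped-down version of that lookup, assuming lspci is available (the real helper works from a pre-built pci_bus_cache instead):

    # List E810 functions (8086:159b, 8086:1592) and the netdev behind each one,
    # using the same sysfs path the nvmf/common.sh trace above globs.
    for pci in $(lspci -Dnn | awk '/8086:(159b|1592)/ {print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done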
00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:03.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:03.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:10:03.592 00:10:03.592 --- 10.0.0.2 ping statistics --- 00:10:03.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.592 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:03.592 00:10:03.592 --- 10.0.0.1 ping statistics --- 00:10:03.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.592 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=815126 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 815126 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 815126 ']' 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:03.592 00:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:03.592 [2024-05-15 00:25:29.699989] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
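The nvmf_tcp_init sequence above splits the two E810 ports across a network namespace so that initiator and target traffic crosses the physical link rather than loopback (this is the NET_TYPE=phy configuration): cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, port 4420 is opened in iptables, and both directions are verified with ping before nvmf_tgt is started inside the namespace. Condensed from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator address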
00:10:03.592 [2024-05-15 00:25:29.700089] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.592 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.851 [2024-05-15 00:25:29.781177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.851 [2024-05-15 00:25:29.898881] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.851 [2024-05-15 00:25:29.898955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.851 [2024-05-15 00:25:29.898973] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.851 [2024-05-15 00:25:29.898986] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.851 [2024-05-15 00:25:29.898998] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.851 [2024-05-15 00:25:29.899065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.851 [2024-05-15 00:25:29.899116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.851 [2024-05-15 00:25:29.899244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.851 [2024-05-15 00:25:29.899247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.785 00:25:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:04.785 00:25:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:10:04.785 00:25:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.785 00:25:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:04.785 00:25:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:04.785 00:25:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.785 00:25:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:04.785 [2024-05-15 00:25:30.942779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.042 00:25:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:05.042 00:25:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:05.042 00:25:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:05.300 Malloc1 00:10:05.300 00:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:05.558 Malloc2 00:10:05.558 00:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.815 00:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:06.071 00:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.328 [2024-05-15 00:25:32.265656] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:06.328 [2024-05-15 00:25:32.265977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.328 00:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:10:06.328 00:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eef71715-7d8b-4262-a38f-eb58b0e2c006 -a 10.0.0.2 -s 4420 -i 4 00:10:06.328 00:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.328 00:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:10:06.328 00:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.328 00:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:10:06.328 00:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:10:08.225 00:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:08.225 00:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:08.225 00:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:08.483 [ 0]:0x1 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=645ee38625984382ad26cf3a4cb43e90 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 645ee38625984382ad26cf3a4cb43e90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:08.483 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:08.740 [ 0]:0x1 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=645ee38625984382ad26cf3a4cb43e90 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 645ee38625984382ad26cf3a4cb43e90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:08.740 [ 1]:0x2 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:08.740 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:08.998 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5802f9f3b57f4df9955ecedeafb18775 00:10:08.998 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5802f9f3b57f4df9955ecedeafb18775 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:08.998 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:10:08.998 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.998 00:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.256 00:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eef71715-7d8b-4262-a38f-eb58b0e2c006 -a 10.0.0.2 -s 4420 -i 4 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:10:09.514 00:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # 
grep -c SPDKISFASTANDAWESOME 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:11.440 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:11.697 [ 0]:0x2 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5802f9f3b57f4df9955ecedeafb18775 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5802f9f3b57f4df9955ecedeafb18775 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:11.697 00:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:11.955 [ 0]:0x1 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=645ee38625984382ad26cf3a4cb43e90 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 645ee38625984382ad26cf3a4cb43e90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:11.955 [ 1]:0x2 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5802f9f3b57f4df9955ecedeafb18775 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5802f9f3b57f4df9955ecedeafb18775 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:11.955 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:12.212 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:12.212 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:12.212 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:12.212 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:12.212 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:12.213 00:25:38 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:12.213 [ 0]:0x2 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:12.213 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:12.470 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5802f9f3b57f4df9955ecedeafb18775 00:10:12.470 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5802f9f3b57f4df9955ecedeafb18775 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:12.470 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:10:12.470 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.470 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:12.727 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:10:12.727 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eef71715-7d8b-4262-a38f-eb58b0e2c006 -a 10.0.0.2 -s 4420 -i 4 00:10:12.727 00:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:12.727 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:10:12.727 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.727 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:10:12.727 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:10:12.728 00:25:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:10:14.628 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:14.885 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:14.885 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:14.885 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:14.885 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:14.885 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:14.885 [ 0]:0x1 00:10:14.885 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:14.885 00:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:14.885 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=645ee38625984382ad26cf3a4cb43e90 00:10:14.885 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 645ee38625984382ad26cf3a4cb43e90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:14.885 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:14.885 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:14.885 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:15.142 [ 1]:0x2 00:10:15.142 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:15.142 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:15.142 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5802f9f3b57f4df9955ecedeafb18775 00:10:15.142 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5802f9f3b57f4df9955ecedeafb18775 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.142 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:15.399 [ 0]:0x2 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5802f9f3b57f4df9955ecedeafb18775 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5802f9f3b57f4df9955ecedeafb18775 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:15.399 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:15.655 [2024-05-15 00:25:41.723885] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:15.655 
request: 00:10:15.655 { 00:10:15.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.655 "nsid": 2, 00:10:15.655 "host": "nqn.2016-06.io.spdk:host1", 00:10:15.655 "method": "nvmf_ns_remove_host", 00:10:15.655 "req_id": 1 00:10:15.655 } 00:10:15.655 Got JSON-RPC error response 00:10:15.655 response: 00:10:15.655 { 00:10:15.655 "code": -32602, 00:10:15.655 "message": "Invalid parameters" 00:10:15.655 } 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:15.655 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:15.912 [ 0]:0x2 00:10:15.912 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:15.912 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:15.912 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5802f9f3b57f4df9955ecedeafb18775 00:10:15.912 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5802f9f3b57f4df9955ecedeafb18775 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:15.912 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:10:15.912 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.912 00:25:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.169 rmmod nvme_tcp 00:10:16.169 rmmod nvme_fabrics 00:10:16.169 rmmod nvme_keyring 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 815126 ']' 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 815126 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 815126 ']' 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 815126 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 815126 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 815126' 00:10:16.169 killing process with pid 815126 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 815126 00:10:16.169 [2024-05-15 00:25:42.309197] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:16.169 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 815126 00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
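For readers following the masking checks in the trace above: every ns_is_visible / "NOT ns_is_visible" pair boils down to the same probe. The sketch below is reconstructed from the target/ns_masking.sh trace lines (list the active NSIDs, pull the NGUID for one of them, and treat an all-zero NGUID as "masked"); the controller name nvme0 is the $ctrl_id value the trace resolved, and the exact helper in the script may differ slightly.

    # Minimal sketch of the visibility probe seen in the trace (names taken from the trace, not verified against the script source)
    ns_is_visible() {
        local nsid=$1
        # print the NSID if the controller currently exposes it
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # a masked namespace reports an all-zero NGUID to this host
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

When the namespace is hidden, the NGUID comes back as zeros, the final test fails, and the surrounding NOT wrapper converts that failure into the expected result.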
00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.736 00:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.647 00:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:18.647 00:10:18.648 real 0m17.800s 00:10:18.648 user 0m54.444s 00:10:18.648 sys 0m4.254s 00:10:18.648 00:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:18.648 00:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:18.648 ************************************ 00:10:18.648 END TEST nvmf_ns_masking 00:10:18.648 ************************************ 00:10:18.648 00:25:44 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:18.648 00:25:44 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:18.648 00:25:44 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:18.648 00:25:44 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:18.648 00:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.648 ************************************ 00:10:18.648 START TEST nvmf_nvme_cli 00:10:18.648 ************************************ 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:18.648 * Looking for test storage... 
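Before the nvme_cli run gets going, a quick recap of the RPCs the masking test above actually drove. This is the same pair of calls copied out of the trace (NQNs and the rpc.py path are the trace's own values), shown without the rpc_cmd plumbing:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # expose namespace 1 of cnode1 to host1 ...
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ... and hide it again; the host then sees an all-zero NGUID for that NSID
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The trace also exercises the failure path: the same remove call against namespace ID 2 is rejected with JSON-RPC error -32602 "Invalid parameters", which the NOT wrapper again treats as the expected outcome.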
00:10:18.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:18.648 00:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:18.906 00:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:21.438 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:21.438 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:21.438 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:21.438 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.438 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:10:21.439 00:10:21.439 --- 10.0.0.2 ping statistics --- 00:10:21.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.439 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:10:21.439 00:10:21.439 --- 10.0.0.1 ping statistics --- 00:10:21.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.439 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=819098 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 819098 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # '[' -z 819098 ']' 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:21.439 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 [2024-05-15 00:25:47.469751] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:10:21.439 [2024-05-15 00:25:47.469825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.439 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.439 [2024-05-15 00:25:47.545414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.697 [2024-05-15 00:25:47.654618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.697 [2024-05-15 00:25:47.654678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:21.697 [2024-05-15 00:25:47.654692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.697 [2024-05-15 00:25:47.654703] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.697 [2024-05-15 00:25:47.654712] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.697 [2024-05-15 00:25:47.654767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.697 [2024-05-15 00:25:47.654826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.698 [2024-05-15 00:25:47.654891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.698 [2024-05-15 00:25:47.654894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # return 0 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.698 [2024-05-15 00:25:47.811849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.698 Malloc0 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.698 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 Malloc1 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.956 00:25:47 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 [2024-05-15 00:25:47.897746] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:21.956 [2024-05-15 00:25:47.898061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:21.956 00:25:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:21.956 00:10:21.956 Discovery Log Number of Records 2, Generation counter 2 00:10:21.956 =====Discovery Log Entry 0====== 00:10:21.956 trtype: tcp 00:10:21.956 adrfam: ipv4 00:10:21.956 subtype: current discovery subsystem 00:10:21.956 treq: not required 00:10:21.956 portid: 0 00:10:21.956 trsvcid: 4420 00:10:21.956 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:21.956 traddr: 10.0.0.2 00:10:21.956 eflags: explicit discovery connections, duplicate discovery information 00:10:21.956 sectype: none 00:10:21.956 =====Discovery Log Entry 1====== 00:10:21.956 trtype: tcp 00:10:21.956 adrfam: ipv4 00:10:21.956 subtype: nvme subsystem 00:10:21.956 treq: not required 00:10:21.956 portid: 0 00:10:21.956 trsvcid: 4420 00:10:21.956 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:21.956 traddr: 10.0.0.2 00:10:21.956 eflags: none 00:10:21.956 sectype: none 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
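The target-side configuration that produced the discovery log shown above is the usual SPDK RPC sequence. Pulled out of the rpc_cmd trace lines and run against rpc.py directly, it is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two Malloc bdevs become namespaces 1 and 2 of cnode1, which is why the subsequent nvme discover reports one discovery subsystem plus one NVMe subsystem, and why the host later sees two block devices with the SPDKISFASTANDAWESOME serial.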
00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:21.956 00:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:21.957 00:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.889 00:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:22.889 00:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local i=0 00:10:22.889 00:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.889 00:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:10:22.889 00:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:10:22.889 00:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # sleep 2 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # return 0 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:24.789 /dev/nvme0n1 ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:24.789 00:25:50 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # local i=0 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # return 0 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.789 rmmod nvme_tcp 00:10:24.789 rmmod nvme_fabrics 00:10:24.789 rmmod nvme_keyring 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 819098 ']' 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 819098 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' -z 819098 ']' 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # kill -0 819098 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # uname 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 819098 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # echo 'killing process with pid 819098' 00:10:24.789 killing process with pid 819098 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # kill 819098 00:10:24.789 [2024-05-15 00:25:50.908123] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:24.789 00:25:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # wait 819098 00:10:25.355 00:25:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.355 00:25:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.355 00:25:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.356 00:25:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.356 00:25:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.356 00:25:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.356 00:25:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.356 00:25:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.261 00:25:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.261 00:10:27.261 real 0m8.541s 00:10:27.261 user 0m14.734s 00:10:27.261 sys 0m2.446s 00:10:27.261 00:25:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:27.261 00:25:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:27.261 ************************************ 00:10:27.261 END TEST nvmf_nvme_cli 00:10:27.261 ************************************ 00:10:27.261 00:25:53 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:27.261 00:25:53 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:27.261 00:25:53 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:27.261 00:25:53 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:27.261 00:25:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.261 ************************************ 00:10:27.261 START TEST 
nvmf_vfio_user 00:10:27.261 ************************************ 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:27.261 * Looking for test storage... 00:10:27.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.261 00:25:53 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=819904 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 819904' 00:10:27.262 Process pid: 819904 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 819904 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 819904 ']' 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:27.262 00:25:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:27.519 [2024-05-15 00:25:53.459520] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:10:27.519 [2024-05-15 00:25:53.459607] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.519 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.519 [2024-05-15 00:25:53.527882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.519 [2024-05-15 00:25:53.636297] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.519 [2024-05-15 00:25:53.636357] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.519 [2024-05-15 00:25:53.636383] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.519 [2024-05-15 00:25:53.636396] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.519 [2024-05-15 00:25:53.636407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:27.519 [2024-05-15 00:25:53.636503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.519 [2024-05-15 00:25:53.636567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.519 [2024-05-15 00:25:53.636619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.519 [2024-05-15 00:25:53.636622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.481 00:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:28.481 00:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:10:28.481 00:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:29.414 00:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:29.671 00:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:29.671 00:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:29.671 00:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:29.671 00:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:29.671 00:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:29.929 Malloc1 00:10:29.929 00:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:30.187 00:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:30.445 00:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:30.703 [2024-05-15 00:25:56.658893] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:30.703 00:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:30.703 00:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:30.703 00:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:30.961 Malloc2 00:10:30.961 00:25:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:31.219 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:31.477 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
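(Note, not part of the captured console output: the rpc.py calls traced above boil down to the following minimal sketch of the VFIOUSER target setup. The subcommands, NQNs, serials, and /var/run/vfio-user paths are taken verbatim from the log; the loop, the $rpc shorthand, and the assumption that nvmf_tgt is already running are illustrative only.)

    # Hypothetical recap of the setup_nvmf_vfio_user phase, assuming nvmf_tgt is up.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        # one malloc-backed namespace per vfio-user controller
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done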
00:10:31.737 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:31.737 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:31.737 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:31.737 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:31.737 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:31.737 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:31.737 [2024-05-15 00:25:57.701492] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:10:31.737 [2024-05-15 00:25:57.701534] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820454 ] 00:10:31.737 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.737 [2024-05-15 00:25:57.736258] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:31.737 [2024-05-15 00:25:57.738802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:31.737 [2024-05-15 00:25:57.738830] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc0346c1000 00:10:31.737 [2024-05-15 00:25:57.739798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:31.737 [2024-05-15 00:25:57.740793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:31.737 [2024-05-15 00:25:57.741797] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:31.738 [2024-05-15 00:25:57.742799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:31.738 [2024-05-15 00:25:57.743808] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:31.738 [2024-05-15 00:25:57.744813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:31.738 [2024-05-15 00:25:57.745817] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:31.738 [2024-05-15 00:25:57.746823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:31.738 [2024-05-15 00:25:57.747831] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:31.738 [2024-05-15 00:25:57.747860] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc0346b6000 00:10:31.738 [2024-05-15 00:25:57.749265] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:31.738 [2024-05-15 00:25:57.769199] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:31.738 [2024-05-15 00:25:57.769252] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:31.738 [2024-05-15 00:25:57.774010] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:31.738 [2024-05-15 00:25:57.774074] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:31.738 [2024-05-15 00:25:57.774182] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:31.738 [2024-05-15 00:25:57.774229] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:31.738 [2024-05-15 00:25:57.774241] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:31.738 [2024-05-15 00:25:57.774994] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:31.738 [2024-05-15 00:25:57.775014] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:31.738 [2024-05-15 00:25:57.775026] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:31.738 [2024-05-15 00:25:57.775994] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:31.738 [2024-05-15 00:25:57.776013] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:31.738 [2024-05-15 00:25:57.776026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:31.738 [2024-05-15 00:25:57.777002] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:31.738 [2024-05-15 00:25:57.777022] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:31.738 [2024-05-15 00:25:57.778009] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:31.738 [2024-05-15 00:25:57.778030] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:31.738 [2024-05-15 00:25:57.778039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:31.738 [2024-05-15 00:25:57.778051] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:31.738 
[2024-05-15 00:25:57.778162] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:31.738 [2024-05-15 00:25:57.778170] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:31.738 [2024-05-15 00:25:57.778179] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:31.738 [2024-05-15 00:25:57.779017] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:31.738 [2024-05-15 00:25:57.780022] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:31.738 [2024-05-15 00:25:57.781028] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:31.738 [2024-05-15 00:25:57.782023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:31.738 [2024-05-15 00:25:57.782166] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:31.738 [2024-05-15 00:25:57.783041] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:31.738 [2024-05-15 00:25:57.783060] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:31.738 [2024-05-15 00:25:57.783069] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783093] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:31.738 [2024-05-15 00:25:57.783107] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783138] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:31.738 [2024-05-15 00:25:57.783149] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:31.738 [2024-05-15 00:25:57.783173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:31.738 [2024-05-15 00:25:57.783269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:31.738 [2024-05-15 00:25:57.783289] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:31.738 [2024-05-15 00:25:57.783297] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:31.738 [2024-05-15 00:25:57.783305] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:31.738 [2024-05-15 00:25:57.783312] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:31.738 [2024-05-15 00:25:57.783320] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:31.738 [2024-05-15 00:25:57.783327] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:31.738 [2024-05-15 00:25:57.783335] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783353] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:31.738 [2024-05-15 00:25:57.783388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:31.738 [2024-05-15 00:25:57.783412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.738 [2024-05-15 00:25:57.783426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.738 [2024-05-15 00:25:57.783442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.738 [2024-05-15 00:25:57.783454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:31.738 [2024-05-15 00:25:57.783462] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783473] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:31.738 [2024-05-15 00:25:57.783498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:31.738 [2024-05-15 00:25:57.783509] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:31.738 [2024-05-15 00:25:57.783522] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783534] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783545] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:31.738 [2024-05-15 
00:25:57.783571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:31.738 [2024-05-15 00:25:57.783625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783641] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783656] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:31.738 [2024-05-15 00:25:57.783664] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:31.738 [2024-05-15 00:25:57.783673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:31.738 [2024-05-15 00:25:57.783690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:31.738 [2024-05-15 00:25:57.783715] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:31.738 [2024-05-15 00:25:57.783735] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783750] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:31.738 [2024-05-15 00:25:57.783761] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:31.738 [2024-05-15 00:25:57.783769] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:31.738 [2024-05-15 00:25:57.783778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:31.738 [2024-05-15 00:25:57.783806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.783828] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:31.739 [2024-05-15 00:25:57.783843] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:31.739 [2024-05-15 00:25:57.783854] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:31.739 [2024-05-15 00:25:57.783862] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:31.739 [2024-05-15 00:25:57.783872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.783887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.783906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:31.739 
[2024-05-15 00:25:57.783941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:31.739 [2024-05-15 00:25:57.783958] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:31.739 [2024-05-15 00:25:57.783968] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:31.739 [2024-05-15 00:25:57.783977] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:31.739 [2024-05-15 00:25:57.783986] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:31.739 [2024-05-15 00:25:57.783994] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:31.739 [2024-05-15 00:25:57.784002] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:31.739 [2024-05-15 00:25:57.784036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.784056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.784077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.784089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.784104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.784116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.784132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.784143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.784163] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:31.739 [2024-05-15 00:25:57.784172] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:31.739 [2024-05-15 00:25:57.784179] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:31.739 [2024-05-15 00:25:57.784185] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:31.739 [2024-05-15 00:25:57.784198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:31.739 [2024-05-15 00:25:57.784211] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:31.739 [2024-05-15 00:25:57.784219] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:31.739 [2024-05-15 00:25:57.784243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.784255] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:31.739 [2024-05-15 00:25:57.784263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:31.739 [2024-05-15 00:25:57.784272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.784289] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:31.739 [2024-05-15 00:25:57.784297] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:31.739 [2024-05-15 00:25:57.784306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:31.739 [2024-05-15 00:25:57.784318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.784337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.784354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:31.739 [2024-05-15 00:25:57.784369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:31.739 ===================================================== 00:10:31.739 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:31.739 ===================================================== 00:10:31.739 Controller Capabilities/Features 00:10:31.739 ================================ 00:10:31.739 Vendor ID: 4e58 00:10:31.739 Subsystem Vendor ID: 4e58 00:10:31.739 Serial Number: SPDK1 00:10:31.739 Model Number: SPDK bdev Controller 00:10:31.739 Firmware Version: 24.05 00:10:31.739 Recommended Arb Burst: 6 00:10:31.739 IEEE OUI Identifier: 8d 6b 50 00:10:31.739 Multi-path I/O 00:10:31.739 May have multiple subsystem ports: Yes 00:10:31.739 May have multiple controllers: Yes 00:10:31.739 Associated with SR-IOV VF: No 00:10:31.739 Max Data Transfer Size: 131072 00:10:31.739 Max Number of Namespaces: 32 00:10:31.739 Max Number of I/O Queues: 127 00:10:31.739 NVMe Specification Version (VS): 1.3 00:10:31.739 NVMe Specification Version (Identify): 1.3 00:10:31.739 Maximum Queue Entries: 256 00:10:31.739 Contiguous Queues Required: Yes 00:10:31.739 Arbitration Mechanisms Supported 00:10:31.739 Weighted Round Robin: Not Supported 00:10:31.739 Vendor Specific: Not Supported 00:10:31.739 Reset Timeout: 15000 ms 00:10:31.739 Doorbell Stride: 4 bytes 00:10:31.739 NVM Subsystem Reset: Not Supported 00:10:31.739 Command Sets Supported 00:10:31.739 NVM Command Set: Supported 00:10:31.739 Boot Partition: Not Supported 00:10:31.739 Memory Page Size Minimum: 4096 bytes 00:10:31.739 Memory Page Size Maximum: 4096 bytes 00:10:31.739 Persistent Memory Region: Not Supported 00:10:31.739 Optional Asynchronous 
Events Supported 00:10:31.739 Namespace Attribute Notices: Supported 00:10:31.739 Firmware Activation Notices: Not Supported 00:10:31.739 ANA Change Notices: Not Supported 00:10:31.739 PLE Aggregate Log Change Notices: Not Supported 00:10:31.739 LBA Status Info Alert Notices: Not Supported 00:10:31.739 EGE Aggregate Log Change Notices: Not Supported 00:10:31.739 Normal NVM Subsystem Shutdown event: Not Supported 00:10:31.739 Zone Descriptor Change Notices: Not Supported 00:10:31.739 Discovery Log Change Notices: Not Supported 00:10:31.739 Controller Attributes 00:10:31.739 128-bit Host Identifier: Supported 00:10:31.739 Non-Operational Permissive Mode: Not Supported 00:10:31.739 NVM Sets: Not Supported 00:10:31.739 Read Recovery Levels: Not Supported 00:10:31.739 Endurance Groups: Not Supported 00:10:31.739 Predictable Latency Mode: Not Supported 00:10:31.739 Traffic Based Keep ALive: Not Supported 00:10:31.739 Namespace Granularity: Not Supported 00:10:31.739 SQ Associations: Not Supported 00:10:31.739 UUID List: Not Supported 00:10:31.739 Multi-Domain Subsystem: Not Supported 00:10:31.739 Fixed Capacity Management: Not Supported 00:10:31.739 Variable Capacity Management: Not Supported 00:10:31.739 Delete Endurance Group: Not Supported 00:10:31.739 Delete NVM Set: Not Supported 00:10:31.739 Extended LBA Formats Supported: Not Supported 00:10:31.739 Flexible Data Placement Supported: Not Supported 00:10:31.739 00:10:31.739 Controller Memory Buffer Support 00:10:31.739 ================================ 00:10:31.739 Supported: No 00:10:31.739 00:10:31.739 Persistent Memory Region Support 00:10:31.739 ================================ 00:10:31.739 Supported: No 00:10:31.739 00:10:31.739 Admin Command Set Attributes 00:10:31.739 ============================ 00:10:31.739 Security Send/Receive: Not Supported 00:10:31.739 Format NVM: Not Supported 00:10:31.739 Firmware Activate/Download: Not Supported 00:10:31.739 Namespace Management: Not Supported 00:10:31.739 Device Self-Test: Not Supported 00:10:31.739 Directives: Not Supported 00:10:31.739 NVMe-MI: Not Supported 00:10:31.739 Virtualization Management: Not Supported 00:10:31.739 Doorbell Buffer Config: Not Supported 00:10:31.739 Get LBA Status Capability: Not Supported 00:10:31.739 Command & Feature Lockdown Capability: Not Supported 00:10:31.739 Abort Command Limit: 4 00:10:31.739 Async Event Request Limit: 4 00:10:31.739 Number of Firmware Slots: N/A 00:10:31.739 Firmware Slot 1 Read-Only: N/A 00:10:31.739 Firmware Activation Without Reset: N/A 00:10:31.739 Multiple Update Detection Support: N/A 00:10:31.739 Firmware Update Granularity: No Information Provided 00:10:31.739 Per-Namespace SMART Log: No 00:10:31.739 Asymmetric Namespace Access Log Page: Not Supported 00:10:31.739 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:31.739 Command Effects Log Page: Supported 00:10:31.739 Get Log Page Extended Data: Supported 00:10:31.739 Telemetry Log Pages: Not Supported 00:10:31.739 Persistent Event Log Pages: Not Supported 00:10:31.739 Supported Log Pages Log Page: May Support 00:10:31.739 Commands Supported & Effects Log Page: Not Supported 00:10:31.740 Feature Identifiers & Effects Log Page:May Support 00:10:31.740 NVMe-MI Commands & Effects Log Page: May Support 00:10:31.740 Data Area 4 for Telemetry Log: Not Supported 00:10:31.740 Error Log Page Entries Supported: 128 00:10:31.740 Keep Alive: Supported 00:10:31.740 Keep Alive Granularity: 10000 ms 00:10:31.740 00:10:31.740 NVM Command Set Attributes 00:10:31.740 ========================== 
00:10:31.740 Submission Queue Entry Size 00:10:31.740 Max: 64 00:10:31.740 Min: 64 00:10:31.740 Completion Queue Entry Size 00:10:31.740 Max: 16 00:10:31.740 Min: 16 00:10:31.740 Number of Namespaces: 32 00:10:31.740 Compare Command: Supported 00:10:31.740 Write Uncorrectable Command: Not Supported 00:10:31.740 Dataset Management Command: Supported 00:10:31.740 Write Zeroes Command: Supported 00:10:31.740 Set Features Save Field: Not Supported 00:10:31.740 Reservations: Not Supported 00:10:31.740 Timestamp: Not Supported 00:10:31.740 Copy: Supported 00:10:31.740 Volatile Write Cache: Present 00:10:31.740 Atomic Write Unit (Normal): 1 00:10:31.740 Atomic Write Unit (PFail): 1 00:10:31.740 Atomic Compare & Write Unit: 1 00:10:31.740 Fused Compare & Write: Supported 00:10:31.740 Scatter-Gather List 00:10:31.740 SGL Command Set: Supported (Dword aligned) 00:10:31.740 SGL Keyed: Not Supported 00:10:31.740 SGL Bit Bucket Descriptor: Not Supported 00:10:31.740 SGL Metadata Pointer: Not Supported 00:10:31.740 Oversized SGL: Not Supported 00:10:31.740 SGL Metadata Address: Not Supported 00:10:31.740 SGL Offset: Not Supported 00:10:31.740 Transport SGL Data Block: Not Supported 00:10:31.740 Replay Protected Memory Block: Not Supported 00:10:31.740 00:10:31.740 Firmware Slot Information 00:10:31.740 ========================= 00:10:31.740 Active slot: 1 00:10:31.740 Slot 1 Firmware Revision: 24.05 00:10:31.740 00:10:31.740 00:10:31.740 Commands Supported and Effects 00:10:31.740 ============================== 00:10:31.740 Admin Commands 00:10:31.740 -------------- 00:10:31.740 Get Log Page (02h): Supported 00:10:31.740 Identify (06h): Supported 00:10:31.740 Abort (08h): Supported 00:10:31.740 Set Features (09h): Supported 00:10:31.740 Get Features (0Ah): Supported 00:10:31.740 Asynchronous Event Request (0Ch): Supported 00:10:31.740 Keep Alive (18h): Supported 00:10:31.740 I/O Commands 00:10:31.740 ------------ 00:10:31.740 Flush (00h): Supported LBA-Change 00:10:31.740 Write (01h): Supported LBA-Change 00:10:31.740 Read (02h): Supported 00:10:31.740 Compare (05h): Supported 00:10:31.740 Write Zeroes (08h): Supported LBA-Change 00:10:31.740 Dataset Management (09h): Supported LBA-Change 00:10:31.740 Copy (19h): Supported LBA-Change 00:10:31.740 Unknown (79h): Supported LBA-Change 00:10:31.740 Unknown (7Ah): Supported 00:10:31.740 00:10:31.740 Error Log 00:10:31.740 ========= 00:10:31.740 00:10:31.740 Arbitration 00:10:31.740 =========== 00:10:31.740 Arbitration Burst: 1 00:10:31.740 00:10:31.740 Power Management 00:10:31.740 ================ 00:10:31.740 Number of Power States: 1 00:10:31.740 Current Power State: Power State #0 00:10:31.740 Power State #0: 00:10:31.740 Max Power: 0.00 W 00:10:31.740 Non-Operational State: Operational 00:10:31.740 Entry Latency: Not Reported 00:10:31.740 Exit Latency: Not Reported 00:10:31.740 Relative Read Throughput: 0 00:10:31.740 Relative Read Latency: 0 00:10:31.740 Relative Write Throughput: 0 00:10:31.740 Relative Write Latency: 0 00:10:31.740 Idle Power: Not Reported 00:10:31.740 Active Power: Not Reported 00:10:31.740 Non-Operational Permissive Mode: Not Supported 00:10:31.740 00:10:31.740 Health Information 00:10:31.740 ================== 00:10:31.740 Critical Warnings: 00:10:31.740 Available Spare Space: OK 00:10:31.740 Temperature: OK 00:10:31.740 Device Reliability: OK 00:10:31.740 Read Only: No 00:10:31.740 Volatile Memory Backup: OK 00:10:31.740 Current Temperature: 0 Kelvin (-2[2024-05-15 00:25:57.784495] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:31.740 [2024-05-15 00:25:57.784511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:31.740 [2024-05-15 00:25:57.784553] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:31.740 [2024-05-15 00:25:57.784570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.740 [2024-05-15 00:25:57.784582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.740 [2024-05-15 00:25:57.784592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.740 [2024-05-15 00:25:57.784601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:31.740 [2024-05-15 00:25:57.785052] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:31.740 [2024-05-15 00:25:57.785076] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:31.740 [2024-05-15 00:25:57.786049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:31.740 [2024-05-15 00:25:57.786124] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:31.740 [2024-05-15 00:25:57.786140] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:31.740 [2024-05-15 00:25:57.787060] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:31.740 [2024-05-15 00:25:57.787087] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:31.740 [2024-05-15 00:25:57.787147] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:31.740 [2024-05-15 00:25:57.790942] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:31.740 73 Celsius) 00:10:31.740 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:31.740 Available Spare: 0% 00:10:31.740 Available Spare Threshold: 0% 00:10:31.740 Life Percentage Used: 0% 00:10:31.740 Data Units Read: 0 00:10:31.740 Data Units Written: 0 00:10:31.740 Host Read Commands: 0 00:10:31.740 Host Write Commands: 0 00:10:31.740 Controller Busy Time: 0 minutes 00:10:31.740 Power Cycles: 0 00:10:31.740 Power On Hours: 0 hours 00:10:31.740 Unsafe Shutdowns: 0 00:10:31.740 Unrecoverable Media Errors: 0 00:10:31.740 Lifetime Error Log Entries: 0 00:10:31.740 Warning Temperature Time: 0 minutes 00:10:31.740 Critical Temperature Time: 0 minutes 00:10:31.740 00:10:31.740 Number of Queues 00:10:31.740 ================ 00:10:31.740 Number of I/O Submission Queues: 127 00:10:31.740 Number of I/O Completion Queues: 127 00:10:31.740 00:10:31.740 Active Namespaces 00:10:31.740 ================= 00:10:31.740 Namespace 
ID:1 00:10:31.740 Error Recovery Timeout: Unlimited 00:10:31.740 Command Set Identifier: NVM (00h) 00:10:31.740 Deallocate: Supported 00:10:31.740 Deallocated/Unwritten Error: Not Supported 00:10:31.740 Deallocated Read Value: Unknown 00:10:31.740 Deallocate in Write Zeroes: Not Supported 00:10:31.740 Deallocated Guard Field: 0xFFFF 00:10:31.740 Flush: Supported 00:10:31.740 Reservation: Supported 00:10:31.740 Namespace Sharing Capabilities: Multiple Controllers 00:10:31.740 Size (in LBAs): 131072 (0GiB) 00:10:31.740 Capacity (in LBAs): 131072 (0GiB) 00:10:31.740 Utilization (in LBAs): 131072 (0GiB) 00:10:31.740 NGUID: 9240B32552B248F887C27FDE0BB9964C 00:10:31.740 UUID: 9240b325-52b2-48f8-87c2-7fde0bb9964c 00:10:31.740 Thin Provisioning: Not Supported 00:10:31.740 Per-NS Atomic Units: Yes 00:10:31.740 Atomic Boundary Size (Normal): 0 00:10:31.740 Atomic Boundary Size (PFail): 0 00:10:31.740 Atomic Boundary Offset: 0 00:10:31.740 Maximum Single Source Range Length: 65535 00:10:31.740 Maximum Copy Length: 65535 00:10:31.740 Maximum Source Range Count: 1 00:10:31.740 NGUID/EUI64 Never Reused: No 00:10:31.740 Namespace Write Protected: No 00:10:31.740 Number of LBA Formats: 1 00:10:31.740 Current LBA Format: LBA Format #00 00:10:31.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:31.740 00:10:31.740 00:25:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:31.740 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.999 [2024-05-15 00:25:58.021754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:37.265 Initializing NVMe Controllers 00:10:37.265 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:37.265 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:37.265 Initialization complete. Launching workers. 00:10:37.265 ======================================================== 00:10:37.265 Latency(us) 00:10:37.265 Device Information : IOPS MiB/s Average min max 00:10:37.265 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34018.90 132.89 3761.89 1179.42 9268.73 00:10:37.265 ======================================================== 00:10:37.265 Total : 34018.90 132.89 3761.89 1179.42 9268.73 00:10:37.265 00:10:37.265 [2024-05-15 00:26:03.046402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:37.265 00:26:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:37.265 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.265 [2024-05-15 00:26:03.288589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:42.524 Initializing NVMe Controllers 00:10:42.524 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:42.524 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:42.524 Initialization complete. Launching workers. 
00:10:42.524 ======================================================== 00:10:42.524 Latency(us) 00:10:42.524 Device Information : IOPS MiB/s Average min max 00:10:42.524 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.18 62.65 7986.21 6003.59 11971.51 00:10:42.524 ======================================================== 00:10:42.524 Total : 16038.18 62.65 7986.21 6003.59 11971.51 00:10:42.524 00:10:42.524 [2024-05-15 00:26:08.325605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:42.524 00:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:42.524 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.524 [2024-05-15 00:26:08.561811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:47.787 [2024-05-15 00:26:13.641332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:47.787 Initializing NVMe Controllers 00:10:47.787 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:47.787 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:47.787 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:47.787 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:47.787 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:47.787 Initialization complete. Launching workers. 00:10:47.787 Starting thread on core 2 00:10:47.787 Starting thread on core 3 00:10:47.787 Starting thread on core 1 00:10:47.787 00:26:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:47.787 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.045 [2024-05-15 00:26:13.968411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:51.331 [2024-05-15 00:26:17.028247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:51.331 Initializing NVMe Controllers 00:10:51.331 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:51.331 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:51.331 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:51.331 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:51.331 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:51.331 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:51.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:51.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:51.331 Initialization complete. Launching workers. 
00:10:51.331 Starting thread on core 1 with urgent priority queue 00:10:51.331 Starting thread on core 2 with urgent priority queue 00:10:51.331 Starting thread on core 3 with urgent priority queue 00:10:51.331 Starting thread on core 0 with urgent priority queue 00:10:51.332 SPDK bdev Controller (SPDK1 ) core 0: 3506.33 IO/s 28.52 secs/100000 ios 00:10:51.332 SPDK bdev Controller (SPDK1 ) core 1: 3602.00 IO/s 27.76 secs/100000 ios 00:10:51.332 SPDK bdev Controller (SPDK1 ) core 2: 3557.67 IO/s 28.11 secs/100000 ios 00:10:51.332 SPDK bdev Controller (SPDK1 ) core 3: 3781.00 IO/s 26.45 secs/100000 ios 00:10:51.332 ======================================================== 00:10:51.332 00:10:51.332 00:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:51.332 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.332 [2024-05-15 00:26:17.337131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:51.332 Initializing NVMe Controllers 00:10:51.332 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:51.332 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:51.332 Namespace ID: 1 size: 0GB 00:10:51.332 Initialization complete. 00:10:51.332 INFO: using host memory buffer for IO 00:10:51.332 Hello world! 00:10:51.332 [2024-05-15 00:26:17.372757] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:51.332 00:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:51.332 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.590 [2024-05-15 00:26:17.667475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:52.525 Initializing NVMe Controllers 00:10:52.526 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:52.526 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:52.526 Initialization complete. Launching workers. 
00:10:52.526 submit (in ns) avg, min, max = 8265.0, 3504.4, 4016095.6 00:10:52.526 complete (in ns) avg, min, max = 26242.4, 2067.8, 4024657.8 00:10:52.526 00:10:52.526 Submit histogram 00:10:52.526 ================ 00:10:52.526 Range in us Cumulative Count 00:10:52.526 3.484 - 3.508: 0.0154% ( 2) 00:10:52.526 3.508 - 3.532: 0.3694% ( 46) 00:10:52.526 3.532 - 3.556: 1.3546% ( 128) 00:10:52.526 3.556 - 3.579: 4.1561% ( 364) 00:10:52.526 3.579 - 3.603: 8.7509% ( 597) 00:10:52.526 3.603 - 3.627: 16.0240% ( 945) 00:10:52.526 3.627 - 3.650: 23.5896% ( 983) 00:10:52.526 3.650 - 3.674: 32.2635% ( 1127) 00:10:52.526 3.674 - 3.698: 40.2063% ( 1032) 00:10:52.526 3.698 - 3.721: 48.1182% ( 1028) 00:10:52.526 3.721 - 3.745: 53.4288% ( 690) 00:10:52.526 3.745 - 3.769: 57.6618% ( 550) 00:10:52.526 3.769 - 3.793: 60.9867% ( 432) 00:10:52.526 3.793 - 3.816: 64.4270% ( 447) 00:10:52.526 3.816 - 3.840: 68.0289% ( 468) 00:10:52.526 3.840 - 3.864: 72.0080% ( 517) 00:10:52.526 3.864 - 3.887: 75.8870% ( 504) 00:10:52.526 3.887 - 3.911: 79.7891% ( 507) 00:10:52.526 3.911 - 3.935: 83.4372% ( 474) 00:10:52.526 3.935 - 3.959: 85.6923% ( 293) 00:10:52.526 3.959 - 3.982: 87.8473% ( 280) 00:10:52.526 3.982 - 4.006: 89.2865% ( 187) 00:10:52.526 4.006 - 4.030: 90.5488% ( 164) 00:10:52.526 4.030 - 4.053: 91.5108% ( 125) 00:10:52.526 4.053 - 4.077: 92.3728% ( 112) 00:10:52.526 4.077 - 4.101: 93.1579% ( 102) 00:10:52.526 4.101 - 4.124: 93.9429% ( 102) 00:10:52.526 4.124 - 4.148: 94.6664% ( 94) 00:10:52.526 4.148 - 4.172: 95.2744% ( 79) 00:10:52.526 4.172 - 4.196: 95.6900% ( 54) 00:10:52.526 4.196 - 4.219: 95.9671% ( 36) 00:10:52.526 4.219 - 4.243: 96.2056% ( 31) 00:10:52.526 4.243 - 4.267: 96.3827% ( 23) 00:10:52.526 4.267 - 4.290: 96.5520% ( 22) 00:10:52.526 4.290 - 4.314: 96.7213% ( 22) 00:10:52.526 4.314 - 4.338: 96.8214% ( 13) 00:10:52.526 4.338 - 4.361: 96.9830% ( 21) 00:10:52.526 4.361 - 4.385: 97.0600% ( 10) 00:10:52.526 4.385 - 4.409: 97.0907% ( 4) 00:10:52.526 4.409 - 4.433: 97.1523% ( 8) 00:10:52.526 4.433 - 4.456: 97.2062% ( 7) 00:10:52.526 4.456 - 4.480: 97.2755% ( 9) 00:10:52.526 4.480 - 4.504: 97.3293% ( 7) 00:10:52.526 4.504 - 4.527: 97.3370% ( 1) 00:10:52.526 4.527 - 4.551: 97.3447% ( 1) 00:10:52.526 4.551 - 4.575: 97.3755% ( 4) 00:10:52.526 4.575 - 4.599: 97.3832% ( 1) 00:10:52.526 4.599 - 4.622: 97.3986% ( 2) 00:10:52.526 4.622 - 4.646: 97.4294% ( 4) 00:10:52.526 4.646 - 4.670: 97.4371% ( 1) 00:10:52.526 4.670 - 4.693: 97.4756% ( 5) 00:10:52.526 4.693 - 4.717: 97.4987% ( 3) 00:10:52.526 4.717 - 4.741: 97.5140% ( 2) 00:10:52.526 4.741 - 4.764: 97.5448% ( 4) 00:10:52.526 4.764 - 4.788: 97.5756% ( 4) 00:10:52.526 4.788 - 4.812: 97.6064% ( 4) 00:10:52.526 4.812 - 4.836: 97.6449% ( 5) 00:10:52.526 4.836 - 4.859: 97.6834% ( 5) 00:10:52.526 4.859 - 4.883: 97.7065% ( 3) 00:10:52.526 4.883 - 4.907: 97.7142% ( 1) 00:10:52.526 4.907 - 4.930: 97.7680% ( 7) 00:10:52.526 4.930 - 4.954: 97.8142% ( 6) 00:10:52.526 4.954 - 4.978: 97.8604% ( 6) 00:10:52.526 4.978 - 5.001: 97.8835% ( 3) 00:10:52.526 5.001 - 5.025: 97.9143% ( 4) 00:10:52.526 5.025 - 5.049: 97.9527% ( 5) 00:10:52.526 5.049 - 5.073: 97.9758% ( 3) 00:10:52.526 5.073 - 5.096: 98.0297% ( 7) 00:10:52.526 5.096 - 5.120: 98.0836% ( 7) 00:10:52.526 5.120 - 5.144: 98.0913% ( 1) 00:10:52.526 5.144 - 5.167: 98.1298% ( 5) 00:10:52.526 5.167 - 5.191: 98.1375% ( 1) 00:10:52.526 5.191 - 5.215: 98.1452% ( 1) 00:10:52.526 5.215 - 5.239: 98.1529% ( 1) 00:10:52.526 5.239 - 5.262: 98.1605% ( 1) 00:10:52.526 5.262 - 5.286: 98.1913% ( 4) 00:10:52.526 5.286 - 5.310: 98.1990% ( 1) 
00:10:52.526 5.310 - 5.333: 98.2144% ( 2) 00:10:52.526 5.357 - 5.381: 98.2221% ( 1) 00:10:52.526 5.381 - 5.404: 98.2375% ( 2) 00:10:52.526 5.404 - 5.428: 98.2452% ( 1) 00:10:52.526 5.428 - 5.452: 98.2529% ( 1) 00:10:52.526 5.452 - 5.476: 98.2683% ( 2) 00:10:52.526 5.499 - 5.523: 98.2760% ( 1) 00:10:52.526 5.570 - 5.594: 98.2837% ( 1) 00:10:52.526 5.594 - 5.618: 98.2914% ( 1) 00:10:52.526 5.713 - 5.736: 98.2991% ( 1) 00:10:52.526 5.736 - 5.760: 98.3145% ( 2) 00:10:52.526 5.784 - 5.807: 98.3222% ( 1) 00:10:52.526 5.831 - 5.855: 98.3299% ( 1) 00:10:52.526 5.855 - 5.879: 98.3376% ( 1) 00:10:52.526 6.068 - 6.116: 98.3453% ( 1) 00:10:52.526 6.163 - 6.210: 98.3530% ( 1) 00:10:52.526 6.874 - 6.921: 98.3684% ( 2) 00:10:52.526 7.064 - 7.111: 98.3760% ( 1) 00:10:52.526 7.111 - 7.159: 98.3837% ( 1) 00:10:52.526 7.253 - 7.301: 98.3991% ( 2) 00:10:52.526 7.301 - 7.348: 98.4068% ( 1) 00:10:52.526 7.396 - 7.443: 98.4145% ( 1) 00:10:52.526 7.443 - 7.490: 98.4299% ( 2) 00:10:52.526 7.490 - 7.538: 98.4376% ( 1) 00:10:52.526 7.538 - 7.585: 98.4453% ( 1) 00:10:52.526 7.870 - 7.917: 98.4607% ( 2) 00:10:52.526 7.917 - 7.964: 98.4684% ( 1) 00:10:52.526 7.964 - 8.012: 98.4761% ( 1) 00:10:52.526 8.059 - 8.107: 98.4838% ( 1) 00:10:52.526 8.107 - 8.154: 98.5069% ( 3) 00:10:52.526 8.154 - 8.201: 98.5146% ( 1) 00:10:52.526 8.201 - 8.249: 98.5300% ( 2) 00:10:52.526 8.249 - 8.296: 98.5454% ( 2) 00:10:52.526 8.344 - 8.391: 98.5531% ( 1) 00:10:52.526 8.439 - 8.486: 98.5685% ( 2) 00:10:52.526 8.533 - 8.581: 98.5762% ( 1) 00:10:52.526 8.581 - 8.628: 98.5839% ( 1) 00:10:52.526 8.628 - 8.676: 98.5992% ( 2) 00:10:52.526 8.676 - 8.723: 98.6069% ( 1) 00:10:52.526 8.723 - 8.770: 98.6146% ( 1) 00:10:52.526 8.770 - 8.818: 98.6223% ( 1) 00:10:52.526 8.818 - 8.865: 98.6300% ( 1) 00:10:52.526 8.913 - 8.960: 98.6377% ( 1) 00:10:52.526 8.960 - 9.007: 98.6454% ( 1) 00:10:52.526 9.007 - 9.055: 98.6685% ( 3) 00:10:52.526 9.055 - 9.102: 98.6839% ( 2) 00:10:52.526 9.102 - 9.150: 98.6993% ( 2) 00:10:52.526 9.292 - 9.339: 98.7147% ( 2) 00:10:52.526 9.387 - 9.434: 98.7224% ( 1) 00:10:52.526 9.434 - 9.481: 98.7301% ( 1) 00:10:52.526 9.671 - 9.719: 98.7455% ( 2) 00:10:52.526 9.719 - 9.766: 98.7532% ( 1) 00:10:52.526 9.813 - 9.861: 98.7609% ( 1) 00:10:52.526 9.861 - 9.908: 98.7686% ( 1) 00:10:52.526 10.050 - 10.098: 98.7763% ( 1) 00:10:52.526 10.287 - 10.335: 98.7840% ( 1) 00:10:52.526 10.335 - 10.382: 98.7994% ( 2) 00:10:52.526 10.477 - 10.524: 98.8070% ( 1) 00:10:52.526 10.809 - 10.856: 98.8147% ( 1) 00:10:52.526 10.951 - 10.999: 98.8224% ( 1) 00:10:52.526 11.141 - 11.188: 98.8301% ( 1) 00:10:52.526 11.188 - 11.236: 98.8378% ( 1) 00:10:52.526 11.236 - 11.283: 98.8455% ( 1) 00:10:52.526 11.520 - 11.567: 98.8532% ( 1) 00:10:52.526 11.852 - 11.899: 98.8609% ( 1) 00:10:52.526 11.947 - 11.994: 98.8686% ( 1) 00:10:52.526 12.089 - 12.136: 98.8763% ( 1) 00:10:52.526 12.231 - 12.326: 98.8840% ( 1) 00:10:52.527 12.516 - 12.610: 98.8917% ( 1) 00:10:52.527 12.800 - 12.895: 98.9071% ( 2) 00:10:52.527 12.895 - 12.990: 98.9148% ( 1) 00:10:52.527 12.990 - 13.084: 98.9225% ( 1) 00:10:52.527 13.179 - 13.274: 98.9302% ( 1) 00:10:52.527 13.274 - 13.369: 98.9379% ( 1) 00:10:52.527 13.369 - 13.464: 98.9533% ( 2) 00:10:52.527 13.464 - 13.559: 98.9687% ( 2) 00:10:52.527 13.559 - 13.653: 98.9764% ( 1) 00:10:52.527 13.653 - 13.748: 98.9841% ( 1) 00:10:52.527 13.843 - 13.938: 98.9918% ( 1) 00:10:52.527 14.033 - 14.127: 98.9995% ( 1) 00:10:52.527 14.412 - 14.507: 99.0226% ( 3) 00:10:52.527 14.696 - 14.791: 99.0302% ( 1) 00:10:52.527 14.981 - 15.076: 99.0379% ( 1) 
00:10:52.527 15.929 - 16.024: 99.0456% ( 1) 00:10:52.527 17.067 - 17.161: 99.0533% ( 1) 00:10:52.527 17.161 - 17.256: 99.0610% ( 1) 00:10:52.527 17.256 - 17.351: 99.0764% ( 2) 00:10:52.527 17.351 - 17.446: 99.0918% ( 2) 00:10:52.527 17.446 - 17.541: 99.1226% ( 4) 00:10:52.527 17.541 - 17.636: 99.1457% ( 3) 00:10:52.527 17.636 - 17.730: 99.1688% ( 3) 00:10:52.527 17.730 - 17.825: 99.2457% ( 10) 00:10:52.527 17.825 - 17.920: 99.2919% ( 6) 00:10:52.527 17.920 - 18.015: 99.3381% ( 6) 00:10:52.527 18.015 - 18.110: 99.3612% ( 3) 00:10:52.527 18.110 - 18.204: 99.4228% ( 8) 00:10:52.527 18.204 - 18.299: 99.4920% ( 9) 00:10:52.527 18.299 - 18.394: 99.5536% ( 8) 00:10:52.527 18.394 - 18.489: 99.6152% ( 8) 00:10:52.527 18.489 - 18.584: 99.6614% ( 6) 00:10:52.527 18.584 - 18.679: 99.7152% ( 7) 00:10:52.527 18.679 - 18.773: 99.7614% ( 6) 00:10:52.527 18.773 - 18.868: 99.7691% ( 1) 00:10:52.527 18.963 - 19.058: 99.7768% ( 1) 00:10:52.527 19.058 - 19.153: 99.7845% ( 1) 00:10:52.527 19.153 - 19.247: 99.7922% ( 1) 00:10:52.527 19.247 - 19.342: 99.8076% ( 2) 00:10:52.527 19.532 - 19.627: 99.8230% ( 2) 00:10:52.527 19.911 - 20.006: 99.8307% ( 1) 00:10:52.527 20.575 - 20.670: 99.8384% ( 1) 00:10:52.527 21.902 - 21.997: 99.8461% ( 1) 00:10:52.527 22.376 - 22.471: 99.8538% ( 1) 00:10:52.527 22.566 - 22.661: 99.8615% ( 1) 00:10:52.527 24.083 - 24.178: 99.8692% ( 1) 00:10:52.527 25.410 - 25.600: 99.8769% ( 1) 00:10:52.527 27.307 - 27.496: 99.8846% ( 1) 00:10:52.527 28.065 - 28.255: 99.8922% ( 1) 00:10:52.527 3980.705 - 4004.978: 99.9692% ( 10) 00:10:52.527 4004.978 - 4029.250: 100.0000% ( 4) 00:10:52.527 00:10:52.527 Complete histogram 00:10:52.527 ================== 00:10:52.527 Range in us Cumulative Count 00:10:52.527 2.062 - 2.074: 1.6086% ( 209) 00:10:52.527 2.074 - 2.086: 27.4148% ( 3353) 00:10:52.527 2.086 - 2.098: 32.7099% ( 688) 00:10:52.527 2.098 - 2.110: 40.6373% ( 1030) 00:10:52.527 2.110 - 2.121: 57.6310% ( 2208) 00:10:52.527 2.121 - 2.133: 59.2704% ( 213) 00:10:52.527 2.133 - 2.145: 63.9498% ( 608) 00:10:52.527 2.145 - 2.157: 71.2999% ( 955) 00:10:52.527 2.157 - 2.169: 72.3390% ( 135) 00:10:52.527 2.169 - 2.181: 76.4719% ( 537) 00:10:52.527 2.181 - 2.193: 80.6434% ( 542) 00:10:52.527 2.193 - 2.204: 81.2130% ( 74) 00:10:52.527 2.204 - 2.216: 83.2602% ( 266) 00:10:52.527 2.216 - 2.228: 87.7549% ( 584) 00:10:52.527 2.228 - 2.240: 88.7863% ( 134) 00:10:52.527 2.240 - 2.252: 90.5411% ( 228) 00:10:52.527 2.252 - 2.264: 92.8731% ( 303) 00:10:52.527 2.264 - 2.276: 93.1502% ( 36) 00:10:52.527 2.276 - 2.287: 93.8121% ( 86) 00:10:52.527 2.287 - 2.299: 94.8665% ( 137) 00:10:52.527 2.299 - 2.311: 95.0974% ( 30) 00:10:52.527 2.311 - 2.323: 95.2128% ( 15) 00:10:52.527 2.323 - 2.335: 95.3129% ( 13) 00:10:52.527 2.335 - 2.347: 95.3975% ( 11) 00:10:52.527 2.347 - 2.359: 95.4822% ( 11) 00:10:52.527 2.359 - 2.370: 95.8131% ( 43) 00:10:52.527 2.370 - 2.382: 96.0286% ( 28) 00:10:52.527 2.382 - 2.394: 96.1287% ( 13) 00:10:52.527 2.394 - 2.406: 96.3134% ( 24) 00:10:52.527 2.406 - 2.418: 96.4673% ( 20) 00:10:52.527 2.418 - 2.430: 96.6982% ( 30) 00:10:52.527 2.430 - 2.441: 96.9368% ( 31) 00:10:52.527 2.441 - 2.453: 97.1369% ( 26) 00:10:52.527 2.453 - 2.465: 97.2908% ( 20) 00:10:52.527 2.465 - 2.477: 97.4679% ( 23) 00:10:52.527 2.477 - 2.489: 97.6372% ( 22) 00:10:52.527 2.489 - 2.501: 97.8065% ( 22) 00:10:52.527 2.501 - 2.513: 97.8989% ( 12) 00:10:52.527 2.513 - 2.524: 98.0451% ( 19) 00:10:52.527 2.524 - 2.536: 98.0990% ( 7) 00:10:52.527 2.536 - 2.548: 98.2221% ( 16) 00:10:52.527 2.548 - 2.560: 98.3145% ( 12) 00:10:52.527 
2.560 - 2.572: 98.3299% ( 2) 00:10:52.527 2.572 - 2.584: 98.3453% ( 2) 00:10:52.527 2.584 - 2.596: 98.3914% ( 6) 00:10:52.527 2.596 - 2.607: 98.4145% ( 3) 00:10:52.527 2.607 - 2.619: 98.4453% ( 4) 00:10:52.527 2.619 - 2.631: 98.4530% ( 1) 00:10:52.527 2.631 - 2.643: 98.4838% ( 4) 00:10:52.527 2.667 - 2.679: 98.4915% ( 1) 00:10:52.527 2.726 - 2.738: 98.4992% ( 1) 00:10:52.527 2.738 - 2.750: 98.5069% ( 1) 00:10:52.527 2.761 - 2.773: 98.5146% ( 1) 00:10:52.527 2.844 - 2.856: 98.5223% ( 1) 00:10:52.527 2.951 - 2.963: 98.5300% ( 1) 00:10:52.527 2.999 - 3.010: 9[2024-05-15 00:26:18.686596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:52.815 8.5377% ( 1) 00:10:52.815 3.200 - 3.224: 98.5454% ( 1) 00:10:52.815 3.319 - 3.342: 98.5531% ( 1) 00:10:52.815 3.366 - 3.390: 98.5608% ( 1) 00:10:52.815 3.437 - 3.461: 98.5685% ( 1) 00:10:52.815 3.461 - 3.484: 98.5762% ( 1) 00:10:52.815 3.508 - 3.532: 98.5839% ( 1) 00:10:52.815 3.532 - 3.556: 98.5992% ( 2) 00:10:52.815 3.579 - 3.603: 98.6069% ( 1) 00:10:52.815 3.603 - 3.627: 98.6146% ( 1) 00:10:52.815 3.650 - 3.674: 98.6300% ( 2) 00:10:52.815 3.698 - 3.721: 98.6377% ( 1) 00:10:52.815 3.721 - 3.745: 98.6454% ( 1) 00:10:52.815 3.745 - 3.769: 98.6531% ( 1) 00:10:52.815 3.769 - 3.793: 98.6762% ( 3) 00:10:52.815 3.793 - 3.816: 98.6839% ( 1) 00:10:52.815 3.911 - 3.935: 98.6916% ( 1) 00:10:52.815 3.982 - 4.006: 98.6993% ( 1) 00:10:52.815 4.006 - 4.030: 98.7147% ( 2) 00:10:52.815 4.053 - 4.077: 98.7224% ( 1) 00:10:52.815 4.219 - 4.243: 98.7301% ( 1) 00:10:52.815 5.025 - 5.049: 98.7378% ( 1) 00:10:52.815 5.167 - 5.191: 98.7455% ( 1) 00:10:52.815 5.404 - 5.428: 98.7532% ( 1) 00:10:52.815 5.476 - 5.499: 98.7609% ( 1) 00:10:52.815 5.641 - 5.665: 98.7686% ( 1) 00:10:52.815 5.736 - 5.760: 98.7763% ( 1) 00:10:52.815 5.807 - 5.831: 98.7840% ( 1) 00:10:52.815 5.950 - 5.973: 98.7917% ( 1) 00:10:52.815 6.163 - 6.210: 98.7994% ( 1) 00:10:52.815 6.210 - 6.258: 98.8070% ( 1) 00:10:52.815 6.258 - 6.305: 98.8224% ( 2) 00:10:52.815 6.590 - 6.637: 98.8301% ( 1) 00:10:52.815 6.874 - 6.921: 98.8378% ( 1) 00:10:52.815 6.969 - 7.016: 98.8455% ( 1) 00:10:52.815 7.301 - 7.348: 98.8532% ( 1) 00:10:52.815 7.443 - 7.490: 98.8609% ( 1) 00:10:52.815 7.680 - 7.727: 98.8686% ( 1) 00:10:52.815 8.249 - 8.296: 98.8763% ( 1) 00:10:52.815 8.486 - 8.533: 98.8840% ( 1) 00:10:52.815 10.145 - 10.193: 98.8917% ( 1) 00:10:52.815 11.425 - 11.473: 98.8994% ( 1) 00:10:52.815 13.843 - 13.938: 98.9071% ( 1) 00:10:52.815 14.791 - 14.886: 98.9148% ( 1) 00:10:52.815 15.550 - 15.644: 98.9225% ( 1) 00:10:52.815 15.644 - 15.739: 98.9379% ( 2) 00:10:52.815 15.739 - 15.834: 98.9456% ( 1) 00:10:52.815 15.834 - 15.929: 98.9533% ( 1) 00:10:52.815 15.929 - 16.024: 98.9687% ( 2) 00:10:52.815 16.024 - 16.119: 98.9918% ( 3) 00:10:52.815 16.119 - 16.213: 99.0226% ( 4) 00:10:52.815 16.213 - 16.308: 99.0379% ( 2) 00:10:52.815 16.308 - 16.403: 99.0610% ( 3) 00:10:52.815 16.403 - 16.498: 99.1072% ( 6) 00:10:52.815 16.498 - 16.593: 99.1611% ( 7) 00:10:52.815 16.593 - 16.687: 99.2150% ( 7) 00:10:52.815 16.687 - 16.782: 99.2381% ( 3) 00:10:52.815 16.782 - 16.877: 99.2765% ( 5) 00:10:52.815 16.877 - 16.972: 99.2842% ( 1) 00:10:52.815 16.972 - 17.067: 99.2919% ( 1) 00:10:52.815 17.067 - 17.161: 99.3073% ( 2) 00:10:52.815 17.256 - 17.351: 99.3150% ( 1) 00:10:52.815 17.351 - 17.446: 99.3227% ( 1) 00:10:52.815 17.446 - 17.541: 99.3304% ( 1) 00:10:52.815 17.541 - 17.636: 99.3381% ( 1) 00:10:52.815 17.636 - 17.730: 99.3458% ( 1) 00:10:52.815 17.730 - 17.825: 99.3535% ( 1) 
00:10:52.815 17.920 - 18.015: 99.3689% ( 2) 00:10:52.815 18.204 - 18.299: 99.3843% ( 2) 00:10:52.815 18.299 - 18.394: 99.3920% ( 1) 00:10:52.815 18.679 - 18.773: 99.3997% ( 1) 00:10:52.815 3980.705 - 4004.978: 99.8846% ( 63) 00:10:52.815 4004.978 - 4029.250: 100.0000% ( 15) 00:10:52.815 00:10:52.815 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:52.815 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:52.815 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:52.815 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:52.815 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:52.815 [ 00:10:52.815 { 00:10:52.815 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:52.815 "subtype": "Discovery", 00:10:52.815 "listen_addresses": [], 00:10:52.815 "allow_any_host": true, 00:10:52.815 "hosts": [] 00:10:52.815 }, 00:10:52.815 { 00:10:52.815 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:52.815 "subtype": "NVMe", 00:10:52.815 "listen_addresses": [ 00:10:52.815 { 00:10:52.815 "trtype": "VFIOUSER", 00:10:52.815 "adrfam": "IPv4", 00:10:52.815 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:52.815 "trsvcid": "0" 00:10:52.815 } 00:10:52.815 ], 00:10:52.815 "allow_any_host": true, 00:10:52.815 "hosts": [], 00:10:52.815 "serial_number": "SPDK1", 00:10:52.815 "model_number": "SPDK bdev Controller", 00:10:52.815 "max_namespaces": 32, 00:10:52.815 "min_cntlid": 1, 00:10:52.815 "max_cntlid": 65519, 00:10:52.815 "namespaces": [ 00:10:52.815 { 00:10:52.815 "nsid": 1, 00:10:52.815 "bdev_name": "Malloc1", 00:10:52.816 "name": "Malloc1", 00:10:52.816 "nguid": "9240B32552B248F887C27FDE0BB9964C", 00:10:52.816 "uuid": "9240b325-52b2-48f8-87c2-7fde0bb9964c" 00:10:52.816 } 00:10:52.816 ] 00:10:52.816 }, 00:10:52.816 { 00:10:52.816 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:52.816 "subtype": "NVMe", 00:10:52.816 "listen_addresses": [ 00:10:52.816 { 00:10:52.816 "trtype": "VFIOUSER", 00:10:52.816 "adrfam": "IPv4", 00:10:52.816 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:52.816 "trsvcid": "0" 00:10:52.816 } 00:10:52.816 ], 00:10:52.816 "allow_any_host": true, 00:10:52.816 "hosts": [], 00:10:52.816 "serial_number": "SPDK2", 00:10:52.816 "model_number": "SPDK bdev Controller", 00:10:52.816 "max_namespaces": 32, 00:10:52.816 "min_cntlid": 1, 00:10:52.816 "max_cntlid": 65519, 00:10:52.816 "namespaces": [ 00:10:52.816 { 00:10:52.816 "nsid": 1, 00:10:52.816 "bdev_name": "Malloc2", 00:10:52.816 "name": "Malloc2", 00:10:52.816 "nguid": "B3CEC35C0BA04B91A424E887751F8FE6", 00:10:52.816 "uuid": "b3cec35c-0ba0-4b91-a424-e887751f8fe6" 00:10:52.816 } 00:10:52.816 ] 00:10:52.816 } 00:10:52.816 ] 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=822862 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:53.078 00:26:18 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:53.078 00:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:53.078 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.078 [2024-05-15 00:26:19.138439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:53.078 Malloc3 00:10:53.336 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:53.336 [2024-05-15 00:26:19.475993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:53.336 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:53.594 Asynchronous Event Request test 00:10:53.594 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:53.594 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:53.594 Registering asynchronous event callbacks... 00:10:53.594 Starting namespace attribute notice tests for all controllers... 00:10:53.594 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:53.594 aer_cb - Changed Namespace 00:10:53.594 Cleaning up... 
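The subsystem listing that follows shows the hot-added namespace (Malloc3, nsid 2) reported by the AER test above. For reference, here is a condensed sketch of that hot-add sequence using only the binaries and RPCs exercised in this run; it assumes the vfio-user target is already serving /var/run/vfio-user/domain/vfio-user1/1, and $SPDK_DIR is a stand-in for the spdk checkout path used above:

  # start the AER listener against the vfio-user controller; it signals readiness by creating /tmp/aer_touch_file
  $SPDK_DIR/test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &
  # create a 64 MB malloc bdev with 512-byte blocks and attach it as namespace 2 of cnode1,
  # which triggers the "Changed Namespace" AEN seen above
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # confirm the new namespace is visible on cnode1
  $SPDK_DIR/scripts/rpc.py nvmf_get_subsystems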
00:10:53.594 [ 00:10:53.594 { 00:10:53.594 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:53.594 "subtype": "Discovery", 00:10:53.594 "listen_addresses": [], 00:10:53.594 "allow_any_host": true, 00:10:53.594 "hosts": [] 00:10:53.594 }, 00:10:53.594 { 00:10:53.594 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:53.594 "subtype": "NVMe", 00:10:53.594 "listen_addresses": [ 00:10:53.594 { 00:10:53.594 "trtype": "VFIOUSER", 00:10:53.594 "adrfam": "IPv4", 00:10:53.594 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:53.594 "trsvcid": "0" 00:10:53.594 } 00:10:53.594 ], 00:10:53.594 "allow_any_host": true, 00:10:53.594 "hosts": [], 00:10:53.594 "serial_number": "SPDK1", 00:10:53.594 "model_number": "SPDK bdev Controller", 00:10:53.594 "max_namespaces": 32, 00:10:53.594 "min_cntlid": 1, 00:10:53.594 "max_cntlid": 65519, 00:10:53.594 "namespaces": [ 00:10:53.594 { 00:10:53.594 "nsid": 1, 00:10:53.594 "bdev_name": "Malloc1", 00:10:53.594 "name": "Malloc1", 00:10:53.594 "nguid": "9240B32552B248F887C27FDE0BB9964C", 00:10:53.594 "uuid": "9240b325-52b2-48f8-87c2-7fde0bb9964c" 00:10:53.594 }, 00:10:53.594 { 00:10:53.594 "nsid": 2, 00:10:53.594 "bdev_name": "Malloc3", 00:10:53.594 "name": "Malloc3", 00:10:53.594 "nguid": "C1F9536121244CAC85E0A278E4A4D04C", 00:10:53.594 "uuid": "c1f95361-2124-4cac-85e0-a278e4a4d04c" 00:10:53.594 } 00:10:53.594 ] 00:10:53.594 }, 00:10:53.594 { 00:10:53.594 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:53.594 "subtype": "NVMe", 00:10:53.594 "listen_addresses": [ 00:10:53.594 { 00:10:53.594 "trtype": "VFIOUSER", 00:10:53.594 "adrfam": "IPv4", 00:10:53.594 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:53.594 "trsvcid": "0" 00:10:53.594 } 00:10:53.594 ], 00:10:53.594 "allow_any_host": true, 00:10:53.594 "hosts": [], 00:10:53.594 "serial_number": "SPDK2", 00:10:53.594 "model_number": "SPDK bdev Controller", 00:10:53.594 "max_namespaces": 32, 00:10:53.594 "min_cntlid": 1, 00:10:53.594 "max_cntlid": 65519, 00:10:53.594 "namespaces": [ 00:10:53.594 { 00:10:53.594 "nsid": 1, 00:10:53.594 "bdev_name": "Malloc2", 00:10:53.594 "name": "Malloc2", 00:10:53.594 "nguid": "B3CEC35C0BA04B91A424E887751F8FE6", 00:10:53.594 "uuid": "b3cec35c-0ba0-4b91-a424-e887751f8fe6" 00:10:53.594 } 00:10:53.594 ] 00:10:53.594 } 00:10:53.594 ] 00:10:53.594 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 822862 00:10:53.594 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:53.594 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:53.594 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:53.594 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:53.594 [2024-05-15 00:26:19.753277] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:10:53.594 [2024-05-15 00:26:19.753328] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822998 ] 00:10:53.855 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.855 [2024-05-15 00:26:19.789162] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:53.855 [2024-05-15 00:26:19.798701] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:53.855 [2024-05-15 00:26:19.798730] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9f57ae7000 00:10:53.855 [2024-05-15 00:26:19.799701] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.800706] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.801713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.802717] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.803722] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.804729] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.805740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.806740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:53.855 [2024-05-15 00:26:19.807750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:53.855 [2024-05-15 00:26:19.807775] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9f57adc000 00:10:53.855 [2024-05-15 00:26:19.808927] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:53.855 [2024-05-15 00:26:19.823567] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:53.855 [2024-05-15 00:26:19.823600] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:53.855 [2024-05-15 00:26:19.828714] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:53.855 [2024-05-15 00:26:19.828766] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:53.855 [2024-05-15 00:26:19.828854] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:10:53.855 [2024-05-15 00:26:19.828876] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:53.855 [2024-05-15 00:26:19.828886] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:53.855 [2024-05-15 00:26:19.829721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:53.855 [2024-05-15 00:26:19.829742] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:53.855 [2024-05-15 00:26:19.829754] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:53.855 [2024-05-15 00:26:19.830727] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:53.855 [2024-05-15 00:26:19.830747] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:53.855 [2024-05-15 00:26:19.830761] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:53.855 [2024-05-15 00:26:19.831733] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:53.855 [2024-05-15 00:26:19.831753] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:53.855 [2024-05-15 00:26:19.832737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:53.855 [2024-05-15 00:26:19.832756] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:53.855 [2024-05-15 00:26:19.832765] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:53.855 [2024-05-15 00:26:19.832776] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:53.855 [2024-05-15 00:26:19.832885] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:53.855 [2024-05-15 00:26:19.832893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:53.855 [2024-05-15 00:26:19.832901] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:53.855 [2024-05-15 00:26:19.833746] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:53.855 [2024-05-15 00:26:19.834750] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:53.855 [2024-05-15 00:26:19.835758] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:53.855 [2024-05-15 00:26:19.836757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:53.855 [2024-05-15 00:26:19.836835] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:53.855 [2024-05-15 00:26:19.837777] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:53.855 [2024-05-15 00:26:19.837812] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:53.855 [2024-05-15 00:26:19.837822] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:53.855 [2024-05-15 00:26:19.837845] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:53.855 [2024-05-15 00:26:19.837862] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:53.855 [2024-05-15 00:26:19.837886] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:53.855 [2024-05-15 00:26:19.837896] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:53.855 [2024-05-15 00:26:19.837940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:53.855 [2024-05-15 00:26:19.845945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:53.855 [2024-05-15 00:26:19.845968] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:53.855 [2024-05-15 00:26:19.845978] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:53.855 [2024-05-15 00:26:19.845985] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:53.855 [2024-05-15 00:26:19.845993] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:53.855 [2024-05-15 00:26:19.846001] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:53.855 [2024-05-15 00:26:19.846008] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:53.856 [2024-05-15 00:26:19.846016] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.846034] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.846054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.853940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.853969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.856 [2024-05-15 00:26:19.853983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.856 [2024-05-15 00:26:19.853995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.856 [2024-05-15 00:26:19.854007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.856 [2024-05-15 00:26:19.854015] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.854028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.854041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.861943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.861961] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:53.856 [2024-05-15 00:26:19.861975] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.861988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.861998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.862011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.869939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.870005] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.870023] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.870037] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:53.856 [2024-05-15 00:26:19.870046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:53.856 [2024-05-15 00:26:19.870056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:53.856 
[2024-05-15 00:26:19.877953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.877982] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:53.856 [2024-05-15 00:26:19.877998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.878012] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.878025] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:53.856 [2024-05-15 00:26:19.878034] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:53.856 [2024-05-15 00:26:19.878043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.885942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.885966] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.885981] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.885994] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:53.856 [2024-05-15 00:26:19.886003] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:53.856 [2024-05-15 00:26:19.886013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.893941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.893970] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.893984] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.893998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.894008] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.894017] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.894026] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:53.856 [2024-05-15 00:26:19.894037] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:53.856 [2024-05-15 00:26:19.894046] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:53.856 [2024-05-15 00:26:19.894075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.901939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.901965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.909941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.909966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.917941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.917966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.925941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.925967] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:53.856 [2024-05-15 00:26:19.925978] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:53.856 [2024-05-15 00:26:19.925984] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:53.856 [2024-05-15 00:26:19.925990] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:53.856 [2024-05-15 00:26:19.926000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:53.856 [2024-05-15 00:26:19.926011] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:53.856 [2024-05-15 00:26:19.926019] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:53.856 [2024-05-15 00:26:19.926028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.926039] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:53.856 [2024-05-15 00:26:19.926047] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:53.856 [2024-05-15 00:26:19.926056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.926073] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:53.856 [2024-05-15 00:26:19.926083] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:53.856 [2024-05-15 00:26:19.926091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:53.856 [2024-05-15 00:26:19.933942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.933970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.933987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:53.856 [2024-05-15 00:26:19.934002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:53.856 ===================================================== 00:10:53.856 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:53.856 ===================================================== 00:10:53.856 Controller Capabilities/Features 00:10:53.856 ================================ 00:10:53.856 Vendor ID: 4e58 00:10:53.856 Subsystem Vendor ID: 4e58 00:10:53.856 Serial Number: SPDK2 00:10:53.856 Model Number: SPDK bdev Controller 00:10:53.856 Firmware Version: 24.05 00:10:53.856 Recommended Arb Burst: 6 00:10:53.856 IEEE OUI Identifier: 8d 6b 50 00:10:53.856 Multi-path I/O 00:10:53.856 May have multiple subsystem ports: Yes 00:10:53.856 May have multiple controllers: Yes 00:10:53.856 Associated with SR-IOV VF: No 00:10:53.856 Max Data Transfer Size: 131072 00:10:53.856 Max Number of Namespaces: 32 00:10:53.856 Max Number of I/O Queues: 127 00:10:53.856 NVMe Specification Version (VS): 1.3 00:10:53.856 NVMe Specification Version (Identify): 1.3 00:10:53.856 Maximum Queue Entries: 256 00:10:53.856 Contiguous Queues Required: Yes 00:10:53.856 Arbitration Mechanisms Supported 00:10:53.856 Weighted Round Robin: Not Supported 00:10:53.856 Vendor Specific: Not Supported 00:10:53.856 Reset Timeout: 15000 ms 00:10:53.856 Doorbell Stride: 4 bytes 00:10:53.856 NVM Subsystem Reset: Not Supported 00:10:53.856 Command Sets Supported 00:10:53.856 NVM Command Set: Supported 00:10:53.856 Boot Partition: Not Supported 00:10:53.857 Memory Page Size Minimum: 4096 bytes 00:10:53.857 Memory Page Size Maximum: 4096 bytes 00:10:53.857 Persistent Memory Region: Not Supported 00:10:53.857 Optional Asynchronous Events Supported 00:10:53.857 Namespace Attribute Notices: Supported 00:10:53.857 Firmware Activation Notices: Not Supported 00:10:53.857 ANA Change Notices: Not Supported 00:10:53.857 PLE Aggregate Log Change Notices: Not Supported 00:10:53.857 LBA Status Info Alert Notices: Not Supported 00:10:53.857 EGE Aggregate Log Change Notices: Not Supported 00:10:53.857 Normal NVM Subsystem Shutdown event: Not Supported 00:10:53.857 Zone Descriptor Change Notices: Not Supported 00:10:53.857 Discovery Log Change Notices: Not Supported 00:10:53.857 Controller Attributes 00:10:53.857 128-bit Host Identifier: Supported 00:10:53.857 Non-Operational Permissive Mode: Not Supported 00:10:53.857 NVM Sets: Not Supported 00:10:53.857 Read Recovery Levels: Not Supported 00:10:53.857 Endurance Groups: Not Supported 00:10:53.857 Predictable Latency Mode: Not Supported 00:10:53.857 Traffic Based Keep ALive: Not Supported 00:10:53.857 Namespace Granularity: Not Supported 
00:10:53.857 SQ Associations: Not Supported 00:10:53.857 UUID List: Not Supported 00:10:53.857 Multi-Domain Subsystem: Not Supported 00:10:53.857 Fixed Capacity Management: Not Supported 00:10:53.857 Variable Capacity Management: Not Supported 00:10:53.857 Delete Endurance Group: Not Supported 00:10:53.857 Delete NVM Set: Not Supported 00:10:53.857 Extended LBA Formats Supported: Not Supported 00:10:53.857 Flexible Data Placement Supported: Not Supported 00:10:53.857 00:10:53.857 Controller Memory Buffer Support 00:10:53.857 ================================ 00:10:53.857 Supported: No 00:10:53.857 00:10:53.857 Persistent Memory Region Support 00:10:53.857 ================================ 00:10:53.857 Supported: No 00:10:53.857 00:10:53.857 Admin Command Set Attributes 00:10:53.857 ============================ 00:10:53.857 Security Send/Receive: Not Supported 00:10:53.857 Format NVM: Not Supported 00:10:53.857 Firmware Activate/Download: Not Supported 00:10:53.857 Namespace Management: Not Supported 00:10:53.857 Device Self-Test: Not Supported 00:10:53.857 Directives: Not Supported 00:10:53.857 NVMe-MI: Not Supported 00:10:53.857 Virtualization Management: Not Supported 00:10:53.857 Doorbell Buffer Config: Not Supported 00:10:53.857 Get LBA Status Capability: Not Supported 00:10:53.857 Command & Feature Lockdown Capability: Not Supported 00:10:53.857 Abort Command Limit: 4 00:10:53.857 Async Event Request Limit: 4 00:10:53.857 Number of Firmware Slots: N/A 00:10:53.857 Firmware Slot 1 Read-Only: N/A 00:10:53.857 Firmware Activation Without Reset: N/A 00:10:53.857 Multiple Update Detection Support: N/A 00:10:53.857 Firmware Update Granularity: No Information Provided 00:10:53.857 Per-Namespace SMART Log: No 00:10:53.857 Asymmetric Namespace Access Log Page: Not Supported 00:10:53.857 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:53.857 Command Effects Log Page: Supported 00:10:53.857 Get Log Page Extended Data: Supported 00:10:53.857 Telemetry Log Pages: Not Supported 00:10:53.857 Persistent Event Log Pages: Not Supported 00:10:53.857 Supported Log Pages Log Page: May Support 00:10:53.857 Commands Supported & Effects Log Page: Not Supported 00:10:53.857 Feature Identifiers & Effects Log Page:May Support 00:10:53.857 NVMe-MI Commands & Effects Log Page: May Support 00:10:53.857 Data Area 4 for Telemetry Log: Not Supported 00:10:53.857 Error Log Page Entries Supported: 128 00:10:53.857 Keep Alive: Supported 00:10:53.857 Keep Alive Granularity: 10000 ms 00:10:53.857 00:10:53.857 NVM Command Set Attributes 00:10:53.857 ========================== 00:10:53.857 Submission Queue Entry Size 00:10:53.857 Max: 64 00:10:53.857 Min: 64 00:10:53.857 Completion Queue Entry Size 00:10:53.857 Max: 16 00:10:53.857 Min: 16 00:10:53.857 Number of Namespaces: 32 00:10:53.857 Compare Command: Supported 00:10:53.857 Write Uncorrectable Command: Not Supported 00:10:53.857 Dataset Management Command: Supported 00:10:53.857 Write Zeroes Command: Supported 00:10:53.857 Set Features Save Field: Not Supported 00:10:53.857 Reservations: Not Supported 00:10:53.857 Timestamp: Not Supported 00:10:53.857 Copy: Supported 00:10:53.857 Volatile Write Cache: Present 00:10:53.857 Atomic Write Unit (Normal): 1 00:10:53.857 Atomic Write Unit (PFail): 1 00:10:53.857 Atomic Compare & Write Unit: 1 00:10:53.857 Fused Compare & Write: Supported 00:10:53.857 Scatter-Gather List 00:10:53.857 SGL Command Set: Supported (Dword aligned) 00:10:53.857 SGL Keyed: Not Supported 00:10:53.857 SGL Bit Bucket Descriptor: Not Supported 00:10:53.857 
SGL Metadata Pointer: Not Supported 00:10:53.857 Oversized SGL: Not Supported 00:10:53.857 SGL Metadata Address: Not Supported 00:10:53.857 SGL Offset: Not Supported 00:10:53.857 Transport SGL Data Block: Not Supported 00:10:53.857 Replay Protected Memory Block: Not Supported 00:10:53.857 00:10:53.857 Firmware Slot Information 00:10:53.857 ========================= 00:10:53.857 Active slot: 1 00:10:53.857 Slot 1 Firmware Revision: 24.05 00:10:53.857 00:10:53.857 00:10:53.857 Commands Supported and Effects 00:10:53.857 ============================== 00:10:53.857 Admin Commands 00:10:53.857 -------------- 00:10:53.857 Get Log Page (02h): Supported 00:10:53.857 Identify (06h): Supported 00:10:53.857 Abort (08h): Supported 00:10:53.857 Set Features (09h): Supported 00:10:53.857 Get Features (0Ah): Supported 00:10:53.857 Asynchronous Event Request (0Ch): Supported 00:10:53.857 Keep Alive (18h): Supported 00:10:53.857 I/O Commands 00:10:53.857 ------------ 00:10:53.857 Flush (00h): Supported LBA-Change 00:10:53.857 Write (01h): Supported LBA-Change 00:10:53.857 Read (02h): Supported 00:10:53.857 Compare (05h): Supported 00:10:53.857 Write Zeroes (08h): Supported LBA-Change 00:10:53.857 Dataset Management (09h): Supported LBA-Change 00:10:53.857 Copy (19h): Supported LBA-Change 00:10:53.857 Unknown (79h): Supported LBA-Change 00:10:53.857 Unknown (7Ah): Supported 00:10:53.857 00:10:53.857 Error Log 00:10:53.857 ========= 00:10:53.857 00:10:53.857 Arbitration 00:10:53.857 =========== 00:10:53.857 Arbitration Burst: 1 00:10:53.857 00:10:53.857 Power Management 00:10:53.857 ================ 00:10:53.857 Number of Power States: 1 00:10:53.857 Current Power State: Power State #0 00:10:53.857 Power State #0: 00:10:53.857 Max Power: 0.00 W 00:10:53.857 Non-Operational State: Operational 00:10:53.857 Entry Latency: Not Reported 00:10:53.857 Exit Latency: Not Reported 00:10:53.857 Relative Read Throughput: 0 00:10:53.857 Relative Read Latency: 0 00:10:53.857 Relative Write Throughput: 0 00:10:53.857 Relative Write Latency: 0 00:10:53.857 Idle Power: Not Reported 00:10:53.857 Active Power: Not Reported 00:10:53.857 Non-Operational Permissive Mode: Not Supported 00:10:53.857 00:10:53.857 Health Information 00:10:53.857 ================== 00:10:53.857 Critical Warnings: 00:10:53.857 Available Spare Space: OK 00:10:53.857 Temperature: OK 00:10:53.857 Device Reliability: OK 00:10:53.857 Read Only: No 00:10:53.857 Volatile Memory Backup: OK 00:10:53.857 [2024-05-15 00:26:19.934130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:53.857 [2024-05-15 00:26:19.941943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:53.857 [2024-05-15 00:26:19.941988] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:53.857 [2024-05-15 00:26:19.942005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.857 [2024-05-15 00:26:19.942016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.857 [2024-05-15 00:26:19.942025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.858 [2024-05-15 00:26:19.942035] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.858 [2024-05-15 00:26:19.942121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:53.858 [2024-05-15 00:26:19.942141] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:53.858 [2024-05-15 00:26:19.943119] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:53.858 [2024-05-15 00:26:19.943187] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:53.858 [2024-05-15 00:26:19.943202] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:53.858 [2024-05-15 00:26:19.944130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:53.858 [2024-05-15 00:26:19.944155] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:53.858 [2024-05-15 00:26:19.944224] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:53.858 [2024-05-15 00:26:19.945457] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:53.858 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:53.858 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:53.858 Available Spare: 0% 00:10:53.858 Available Spare Threshold: 0% 00:10:53.858 Life Percentage Used: 0% 00:10:53.858 Data Units Read: 0 00:10:53.858 Data Units Written: 0 00:10:53.858 Host Read Commands: 0 00:10:53.858 Host Write Commands: 0 00:10:53.858 Controller Busy Time: 0 minutes 00:10:53.858 Power Cycles: 0 00:10:53.858 Power On Hours: 0 hours 00:10:53.858 Unsafe Shutdowns: 0 00:10:53.858 Unrecoverable Media Errors: 0 00:10:53.858 Lifetime Error Log Entries: 0 00:10:53.858 Warning Temperature Time: 0 minutes 00:10:53.858 Critical Temperature Time: 0 minutes 00:10:53.858 00:10:53.858 Number of Queues 00:10:53.858 ================ 00:10:53.858 Number of I/O Submission Queues: 127 00:10:53.858 Number of I/O Completion Queues: 127 00:10:53.858 00:10:53.858 Active Namespaces 00:10:53.858 ================= 00:10:53.858 Namespace ID:1 00:10:53.858 Error Recovery Timeout: Unlimited 00:10:53.858 Command Set Identifier: NVM (00h) 00:10:53.858 Deallocate: Supported 00:10:53.858 Deallocated/Unwritten Error: Not Supported 00:10:53.858 Deallocated Read Value: Unknown 00:10:53.858 Deallocate in Write Zeroes: Not Supported 00:10:53.858 Deallocated Guard Field: 0xFFFF 00:10:53.858 Flush: Supported 00:10:53.858 Reservation: Supported 00:10:53.858 Namespace Sharing Capabilities: Multiple Controllers 00:10:53.858 Size (in LBAs): 131072 (0GiB) 00:10:53.858 Capacity (in LBAs): 131072 (0GiB) 00:10:53.858 Utilization (in LBAs): 131072 (0GiB) 00:10:53.858 NGUID: B3CEC35C0BA04B91A424E887751F8FE6 00:10:53.858 UUID: b3cec35c-0ba0-4b91-a424-e887751f8fe6 00:10:53.858 Thin Provisioning: Not Supported 00:10:53.858 Per-NS Atomic Units: Yes 00:10:53.858 Atomic Boundary Size (Normal): 0 00:10:53.858 Atomic Boundary Size (PFail): 0 00:10:53.858 Atomic Boundary Offset: 0 00:10:53.858 Maximum Single Source Range Length: 65535
00:10:53.858 Maximum Copy Length: 65535 00:10:53.858 Maximum Source Range Count: 1 00:10:53.858 NGUID/EUI64 Never Reused: No 00:10:53.858 Namespace Write Protected: No 00:10:53.858 Number of LBA Formats: 1 00:10:53.858 Current LBA Format: LBA Format #00 00:10:53.858 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:53.858 00:10:53.858 00:26:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:53.858 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.116 [2024-05-15 00:26:20.173981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:59.380 Initializing NVMe Controllers 00:10:59.380 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:59.381 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:59.381 Initialization complete. Launching workers. 00:10:59.381 ======================================================== 00:10:59.381 Latency(us) 00:10:59.381 Device Information : IOPS MiB/s Average min max 00:10:59.381 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33588.53 131.21 3809.81 1166.37 7395.93 00:10:59.381 ======================================================== 00:10:59.381 Total : 33588.53 131.21 3809.81 1166.37 7395.93 00:10:59.381 00:10:59.381 [2024-05-15 00:26:25.280278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:59.381 00:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:59.381 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.381 [2024-05-15 00:26:25.518017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:04.646 Initializing NVMe Controllers 00:11:04.646 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:04.646 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:04.646 Initialization complete. Launching workers. 
00:11:04.646 ======================================================== 00:11:04.646 Latency(us) 00:11:04.646 Device Information : IOPS MiB/s Average min max 00:11:04.646 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31115.46 121.54 4113.07 1209.10 8361.44 00:11:04.646 ======================================================== 00:11:04.646 Total : 31115.46 121.54 4113.07 1209.10 8361.44 00:11:04.646 00:11:04.646 [2024-05-15 00:26:30.540597] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:04.646 00:26:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:04.646 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.646 [2024-05-15 00:26:30.769545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:09.912 [2024-05-15 00:26:35.908081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:09.912 Initializing NVMe Controllers 00:11:09.912 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:09.912 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:09.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:09.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:09.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:09.912 Initialization complete. Launching workers. 00:11:09.912 Starting thread on core 2 00:11:09.912 Starting thread on core 3 00:11:09.912 Starting thread on core 1 00:11:09.913 00:26:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:09.913 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.170 [2024-05-15 00:26:36.229676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:13.453 [2024-05-15 00:26:39.291338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:13.453 Initializing NVMe Controllers 00:11:13.453 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:13.453 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:13.453 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:13.453 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:13.453 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:13.453 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:13.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:13.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:13.454 Initialization complete. Launching workers. 
00:11:13.454 Starting thread on core 1 with urgent priority queue 00:11:13.454 Starting thread on core 2 with urgent priority queue 00:11:13.454 Starting thread on core 3 with urgent priority queue 00:11:13.454 Starting thread on core 0 with urgent priority queue 00:11:13.454 SPDK bdev Controller (SPDK2 ) core 0: 5247.67 IO/s 19.06 secs/100000 ios 00:11:13.454 SPDK bdev Controller (SPDK2 ) core 1: 5144.00 IO/s 19.44 secs/100000 ios 00:11:13.454 SPDK bdev Controller (SPDK2 ) core 2: 4946.67 IO/s 20.22 secs/100000 ios 00:11:13.454 SPDK bdev Controller (SPDK2 ) core 3: 5068.67 IO/s 19.73 secs/100000 ios 00:11:13.454 ======================================================== 00:11:13.454 00:11:13.454 00:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:13.454 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.454 [2024-05-15 00:26:39.598833] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:13.454 Initializing NVMe Controllers 00:11:13.454 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:13.454 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:13.454 Namespace ID: 1 size: 0GB 00:11:13.454 Initialization complete. 00:11:13.454 INFO: using host memory buffer for IO 00:11:13.454 Hello world! 00:11:13.454 [2024-05-15 00:26:39.608973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:13.710 00:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:13.710 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.967 [2024-05-15 00:26:39.927471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:14.899 Initializing NVMe Controllers 00:11:14.899 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:14.899 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:14.899 Initialization complete. Launching workers. 
00:11:14.899 submit (in ns) avg, min, max = 8450.4, 3502.2, 4039867.8 00:11:14.899 complete (in ns) avg, min, max = 27605.1, 2073.3, 5012876.7 00:11:14.899 00:11:14.899 Submit histogram 00:11:14.899 ================ 00:11:14.899 Range in us Cumulative Count 00:11:14.899 3.484 - 3.508: 0.0078% ( 1) 00:11:14.899 3.508 - 3.532: 0.2118% ( 26) 00:11:14.899 3.532 - 3.556: 1.2162% ( 128) 00:11:14.899 3.556 - 3.579: 3.3739% ( 275) 00:11:14.899 3.579 - 3.603: 7.5088% ( 527) 00:11:14.899 3.603 - 3.627: 14.5077% ( 892) 00:11:14.899 3.627 - 3.650: 22.9345% ( 1074) 00:11:14.899 3.650 - 3.674: 29.9019% ( 888) 00:11:14.899 3.674 - 3.698: 36.0063% ( 778) 00:11:14.899 3.698 - 3.721: 41.7811% ( 736) 00:11:14.899 3.721 - 3.745: 46.5202% ( 604) 00:11:14.899 3.745 - 3.769: 50.3727% ( 491) 00:11:14.899 3.769 - 3.793: 53.9976% ( 462) 00:11:14.899 3.793 - 3.816: 57.1675% ( 404) 00:11:14.899 3.816 - 3.840: 61.0200% ( 491) 00:11:14.899 3.840 - 3.864: 65.1393% ( 525) 00:11:14.899 3.864 - 3.887: 69.6508% ( 575) 00:11:14.899 3.887 - 3.911: 74.3586% ( 600) 00:11:14.899 3.911 - 3.935: 77.8737% ( 448) 00:11:14.899 3.935 - 3.959: 80.7375% ( 365) 00:11:14.899 3.959 - 3.982: 83.0836% ( 299) 00:11:14.899 3.982 - 4.006: 85.2727% ( 279) 00:11:14.899 4.006 - 4.030: 86.8968% ( 207) 00:11:14.899 4.030 - 4.053: 88.3405% ( 184) 00:11:14.899 4.053 - 4.077: 89.4390% ( 140) 00:11:14.900 4.077 - 4.101: 90.5767% ( 145) 00:11:14.900 4.101 - 4.124: 91.6673% ( 139) 00:11:14.900 4.124 - 4.148: 92.7344% ( 136) 00:11:14.900 4.148 - 4.172: 93.5269% ( 101) 00:11:14.900 4.172 - 4.196: 94.0447% ( 66) 00:11:14.900 4.196 - 4.219: 94.4449% ( 51) 00:11:14.900 4.219 - 4.243: 94.8764% ( 55) 00:11:14.900 4.243 - 4.267: 95.1746% ( 38) 00:11:14.900 4.267 - 4.290: 95.4492% ( 35) 00:11:14.900 4.290 - 4.314: 95.6767% ( 29) 00:11:14.900 4.314 - 4.338: 95.8258% ( 19) 00:11:14.900 4.338 - 4.361: 95.9435% ( 15) 00:11:14.900 4.361 - 4.385: 96.1083% ( 21) 00:11:14.900 4.385 - 4.409: 96.2103% ( 13) 00:11:14.900 4.409 - 4.433: 96.2652% ( 7) 00:11:14.900 4.433 - 4.456: 96.3123% ( 6) 00:11:14.900 4.456 - 4.480: 96.3437% ( 4) 00:11:14.900 4.480 - 4.504: 96.3750% ( 4) 00:11:14.900 4.504 - 4.527: 96.4064% ( 4) 00:11:14.900 4.527 - 4.551: 96.4378% ( 4) 00:11:14.900 4.551 - 4.575: 96.4535% ( 2) 00:11:14.900 4.575 - 4.599: 96.4849% ( 4) 00:11:14.900 4.599 - 4.622: 96.5163% ( 4) 00:11:14.900 4.622 - 4.646: 96.5241% ( 1) 00:11:14.900 4.646 - 4.670: 96.5320% ( 1) 00:11:14.900 4.670 - 4.693: 96.5398% ( 1) 00:11:14.900 4.693 - 4.717: 96.5555% ( 2) 00:11:14.900 4.717 - 4.741: 96.5634% ( 1) 00:11:14.900 4.741 - 4.764: 96.5712% ( 1) 00:11:14.900 4.788 - 4.812: 96.5869% ( 2) 00:11:14.900 4.812 - 4.836: 96.5947% ( 1) 00:11:14.900 4.836 - 4.859: 96.6026% ( 1) 00:11:14.900 4.859 - 4.883: 96.6183% ( 2) 00:11:14.900 4.883 - 4.907: 96.6340% ( 2) 00:11:14.900 4.907 - 4.930: 96.6654% ( 4) 00:11:14.900 4.930 - 4.954: 96.6732% ( 1) 00:11:14.900 4.954 - 4.978: 96.7360% ( 8) 00:11:14.900 4.978 - 5.001: 96.7831% ( 6) 00:11:14.900 5.001 - 5.025: 96.8458% ( 8) 00:11:14.900 5.025 - 5.049: 96.8537% ( 1) 00:11:14.900 5.049 - 5.073: 96.8929% ( 5) 00:11:14.900 5.073 - 5.096: 96.9478% ( 7) 00:11:14.900 5.096 - 5.120: 96.9949% ( 6) 00:11:14.900 5.120 - 5.144: 97.0577% ( 8) 00:11:14.900 5.144 - 5.167: 97.0969% ( 5) 00:11:14.900 5.167 - 5.191: 97.1361% ( 5) 00:11:14.900 5.191 - 5.215: 97.1675% ( 4) 00:11:14.900 5.215 - 5.239: 97.2381% ( 9) 00:11:14.900 5.239 - 5.262: 97.2852% ( 6) 00:11:14.900 5.262 - 5.286: 97.3087% ( 3) 00:11:14.900 5.286 - 5.310: 97.3637% ( 7) 00:11:14.900 5.310 - 5.333: 97.4107% ( 6) 
00:11:14.900 5.333 - 5.357: 97.4186% ( 1) 00:11:14.900 5.357 - 5.381: 97.4421% ( 3) 00:11:14.900 5.381 - 5.404: 97.4657% ( 3) 00:11:14.900 5.404 - 5.428: 97.4892% ( 3) 00:11:14.900 5.452 - 5.476: 97.4971% ( 1) 00:11:14.900 5.476 - 5.499: 97.5128% ( 2) 00:11:14.900 5.499 - 5.523: 97.5598% ( 6) 00:11:14.900 5.523 - 5.547: 97.5755% ( 2) 00:11:14.900 5.547 - 5.570: 97.5834% ( 1) 00:11:14.900 5.594 - 5.618: 97.5912% ( 1) 00:11:14.900 5.665 - 5.689: 97.6148% ( 3) 00:11:14.900 5.689 - 5.713: 97.6226% ( 1) 00:11:14.900 5.736 - 5.760: 97.6304% ( 1) 00:11:14.900 5.831 - 5.855: 97.6383% ( 1) 00:11:14.900 5.879 - 5.902: 97.6461% ( 1) 00:11:14.900 5.926 - 5.950: 97.6540% ( 1) 00:11:14.900 5.973 - 5.997: 97.6697% ( 2) 00:11:14.900 6.021 - 6.044: 97.6775% ( 1) 00:11:14.900 6.210 - 6.258: 97.6932% ( 2) 00:11:14.900 6.353 - 6.400: 97.7011% ( 1) 00:11:14.900 6.447 - 6.495: 97.7089% ( 1) 00:11:14.900 6.590 - 6.637: 97.7168% ( 1) 00:11:14.900 6.684 - 6.732: 97.7246% ( 1) 00:11:14.900 6.732 - 6.779: 97.7403% ( 2) 00:11:14.900 6.827 - 6.874: 97.7481% ( 1) 00:11:14.900 6.874 - 6.921: 97.7560% ( 1) 00:11:14.900 6.921 - 6.969: 97.7638% ( 1) 00:11:14.900 6.969 - 7.016: 97.7795% ( 2) 00:11:14.900 7.111 - 7.159: 97.7874% ( 1) 00:11:14.900 7.206 - 7.253: 97.8031% ( 2) 00:11:14.900 7.253 - 7.301: 97.8109% ( 1) 00:11:14.900 7.301 - 7.348: 97.8266% ( 2) 00:11:14.900 7.396 - 7.443: 97.8423% ( 2) 00:11:14.900 7.443 - 7.490: 97.8501% ( 1) 00:11:14.900 7.538 - 7.585: 97.8580% ( 1) 00:11:14.900 7.680 - 7.727: 97.8658% ( 1) 00:11:14.900 7.775 - 7.822: 97.8737% ( 1) 00:11:14.900 7.822 - 7.870: 97.8815% ( 1) 00:11:14.900 7.870 - 7.917: 97.8894% ( 1) 00:11:14.900 7.917 - 7.964: 97.8972% ( 1) 00:11:14.900 7.964 - 8.012: 97.9051% ( 1) 00:11:14.900 8.059 - 8.107: 97.9208% ( 2) 00:11:14.900 8.107 - 8.154: 97.9286% ( 1) 00:11:14.900 8.154 - 8.201: 97.9364% ( 1) 00:11:14.900 8.201 - 8.249: 97.9521% ( 2) 00:11:14.900 8.249 - 8.296: 97.9678% ( 2) 00:11:14.900 8.296 - 8.344: 97.9835% ( 2) 00:11:14.900 8.344 - 8.391: 97.9992% ( 2) 00:11:14.900 8.391 - 8.439: 98.0071% ( 1) 00:11:14.900 8.439 - 8.486: 98.0149% ( 1) 00:11:14.900 8.533 - 8.581: 98.0228% ( 1) 00:11:14.900 8.581 - 8.628: 98.0541% ( 4) 00:11:14.900 8.628 - 8.676: 98.0620% ( 1) 00:11:14.900 8.723 - 8.770: 98.0777% ( 2) 00:11:14.900 8.770 - 8.818: 98.0934% ( 2) 00:11:14.900 8.818 - 8.865: 98.1169% ( 3) 00:11:14.900 8.913 - 8.960: 98.1248% ( 1) 00:11:14.900 9.007 - 9.055: 98.1483% ( 3) 00:11:14.900 9.055 - 9.102: 98.1718% ( 3) 00:11:14.900 9.244 - 9.292: 98.1797% ( 1) 00:11:14.900 9.292 - 9.339: 98.1875% ( 1) 00:11:14.900 9.387 - 9.434: 98.2111% ( 3) 00:11:14.900 9.434 - 9.481: 98.2268% ( 2) 00:11:14.900 9.576 - 9.624: 98.2346% ( 1) 00:11:14.900 9.624 - 9.671: 98.2424% ( 1) 00:11:14.900 9.671 - 9.719: 98.2503% ( 1) 00:11:14.900 9.719 - 9.766: 98.2581% ( 1) 00:11:14.900 9.813 - 9.861: 98.2660% ( 1) 00:11:14.900 9.908 - 9.956: 98.2817% ( 2) 00:11:14.900 10.050 - 10.098: 98.2895% ( 1) 00:11:14.900 10.240 - 10.287: 98.3052% ( 2) 00:11:14.900 10.335 - 10.382: 98.3209% ( 2) 00:11:14.900 10.572 - 10.619: 98.3366% ( 2) 00:11:14.900 10.619 - 10.667: 98.3444% ( 1) 00:11:14.900 10.667 - 10.714: 98.3523% ( 1) 00:11:14.900 10.809 - 10.856: 98.3601% ( 1) 00:11:14.900 10.856 - 10.904: 98.3758% ( 2) 00:11:14.900 11.046 - 11.093: 98.3915% ( 2) 00:11:14.900 11.141 - 11.188: 98.3994% ( 1) 00:11:14.900 11.188 - 11.236: 98.4072% ( 1) 00:11:14.900 11.520 - 11.567: 98.4308% ( 3) 00:11:14.900 11.615 - 11.662: 98.4464% ( 2) 00:11:14.900 11.662 - 11.710: 98.4621% ( 2) 00:11:14.900 11.852 - 11.899: 98.4700% ( 
1) 00:11:14.900 11.899 - 11.947: 98.4857% ( 2) 00:11:14.900 11.947 - 11.994: 98.5014% ( 2) 00:11:14.900 12.136 - 12.231: 98.5171% ( 2) 00:11:14.900 12.231 - 12.326: 98.5406% ( 3) 00:11:14.900 12.326 - 12.421: 98.5485% ( 1) 00:11:14.900 12.421 - 12.516: 98.5563% ( 1) 00:11:14.900 12.516 - 12.610: 98.5641% ( 1) 00:11:14.900 12.610 - 12.705: 98.5798% ( 2) 00:11:14.900 12.895 - 12.990: 98.5877% ( 1) 00:11:14.900 12.990 - 13.084: 98.5955% ( 1) 00:11:14.900 13.274 - 13.369: 98.6034% ( 1) 00:11:14.900 13.464 - 13.559: 98.6112% ( 1) 00:11:14.900 13.559 - 13.653: 98.6191% ( 1) 00:11:14.900 13.653 - 13.748: 98.6505% ( 4) 00:11:14.900 13.748 - 13.843: 98.6583% ( 1) 00:11:14.900 13.938 - 14.033: 98.6740% ( 2) 00:11:14.900 14.033 - 14.127: 98.6818% ( 1) 00:11:14.900 14.412 - 14.507: 98.6975% ( 2) 00:11:14.900 14.601 - 14.696: 98.7132% ( 2) 00:11:14.900 14.696 - 14.791: 98.7211% ( 1) 00:11:14.900 14.791 - 14.886: 98.7289% ( 1) 00:11:14.900 14.886 - 14.981: 98.7368% ( 1) 00:11:14.900 15.076 - 15.170: 98.7446% ( 1) 00:11:14.900 15.265 - 15.360: 98.7525% ( 1) 00:11:14.900 15.644 - 15.739: 98.7603% ( 1) 00:11:14.900 17.067 - 17.161: 98.7681% ( 1) 00:11:14.900 17.161 - 17.256: 98.7838% ( 2) 00:11:14.900 17.256 - 17.351: 98.7917% ( 1) 00:11:14.900 17.351 - 17.446: 98.7995% ( 1) 00:11:14.900 17.446 - 17.541: 98.8152% ( 2) 00:11:14.900 17.541 - 17.636: 98.8780% ( 8) 00:11:14.900 17.636 - 17.730: 98.9329% ( 7) 00:11:14.900 17.730 - 17.825: 98.9565% ( 3) 00:11:14.900 17.825 - 17.920: 99.0271% ( 9) 00:11:14.900 17.920 - 18.015: 99.1055% ( 10) 00:11:14.900 18.015 - 18.110: 99.1605% ( 7) 00:11:14.900 18.110 - 18.204: 99.1997% ( 5) 00:11:14.900 18.204 - 18.299: 99.2938% ( 12) 00:11:14.900 18.299 - 18.394: 99.3645% ( 9) 00:11:14.900 18.394 - 18.489: 99.3958% ( 4) 00:11:14.900 18.489 - 18.584: 99.4821% ( 11) 00:11:14.901 18.584 - 18.679: 99.5371% ( 7) 00:11:14.901 18.679 - 18.773: 99.5763% ( 5) 00:11:14.901 18.773 - 18.868: 99.6391% ( 8) 00:11:14.901 18.868 - 18.963: 99.6548% ( 2) 00:11:14.901 18.963 - 19.058: 99.6862% ( 4) 00:11:14.901 19.153 - 19.247: 99.6940% ( 1) 00:11:14.901 19.247 - 19.342: 99.7097% ( 2) 00:11:14.901 19.342 - 19.437: 99.7254% ( 2) 00:11:14.901 19.627 - 19.721: 99.7332% ( 1) 00:11:14.901 19.721 - 19.816: 99.7489% ( 2) 00:11:14.901 19.911 - 20.006: 99.7568% ( 1) 00:11:14.901 20.764 - 20.859: 99.7646% ( 1) 00:11:14.901 21.807 - 21.902: 99.7725% ( 1) 00:11:14.901 21.997 - 22.092: 99.7803% ( 1) 00:11:14.901 22.092 - 22.187: 99.7882% ( 1) 00:11:14.901 22.850 - 22.945: 99.7960% ( 1) 00:11:14.901 23.893 - 23.988: 99.8038% ( 1) 00:11:14.901 23.988 - 24.083: 99.8117% ( 1) 00:11:14.901 25.790 - 25.979: 99.8195% ( 1) 00:11:14.901 25.979 - 26.169: 99.8274% ( 1) 00:11:14.901 26.359 - 26.548: 99.8352% ( 1) 00:11:14.901 26.548 - 26.738: 99.8431% ( 1) 00:11:14.901 26.927 - 27.117: 99.8509% ( 1) 00:11:14.901 27.117 - 27.307: 99.8588% ( 1) 00:11:14.901 28.255 - 28.444: 99.8666% ( 1) 00:11:14.901 28.444 - 28.634: 99.8745% ( 1) 00:11:14.901 28.824 - 29.013: 99.8823% ( 1) 00:11:14.901 32.427 - 32.616: 99.8902% ( 1) 00:11:14.901 3980.705 - 4004.978: 99.9765% ( 11) 00:11:14.901 4004.978 - 4029.250: 99.9922% ( 2) 00:11:14.901 4029.250 - 4053.523: 100.0000% ( 1) 00:11:14.901 00:11:14.901 Complete histogram 00:11:14.901 ================== 00:11:14.901 Range in us Cumulative Count 00:11:14.901 2.062 - 2.074: 0.0157% ( 2) 00:11:14.901 2.074 - 2.086: 5.1393% ( 653) 00:11:14.901 2.086 - 2.098: 22.5500% ( 2219) 00:11:14.901 2.098 - 2.110: 25.0922% ( 324) 00:11:14.901 2.110 - 2.121: 37.6618% ( 1602) 00:11:14.901 2.121 - 2.133: 
47.9090% ( 1306) 00:11:14.901 2.133 - 2.145: 49.4547% ( 197) 00:11:14.901 2.145 - 2.157: 55.4963% ( 770) 00:11:14.901 2.157 - 2.169: 61.4045% ( 753) 00:11:14.901 2.169 - 2.181: 62.8953% ( 190) 00:11:14.901 2.181 - 2.193: 68.3954% ( 701) 00:11:14.901 2.193 - 2.204: 72.4127% ( 512) 00:11:14.901 2.204 - 2.216: 73.3307% ( 117) 00:11:14.901 2.216 - 2.228: 77.3715% ( 515) 00:11:14.901 2.228 - 2.240: 82.5893% ( 665) 00:11:14.901 2.240 - 2.252: 83.8760% ( 164) 00:11:14.901 2.252 - 2.264: 87.4068% ( 450) 00:11:14.901 2.264 - 2.276: 90.2001% ( 356) 00:11:14.901 2.276 - 2.287: 91.2201% ( 130) 00:11:14.901 2.287 - 2.299: 92.6089% ( 177) 00:11:14.901 2.299 - 2.311: 93.7701% ( 148) 00:11:14.901 2.311 - 2.323: 94.2958% ( 67) 00:11:14.901 2.323 - 2.335: 94.4998% ( 26) 00:11:14.901 2.335 - 2.347: 94.5940% ( 12) 00:11:14.901 2.347 - 2.359: 94.6724% ( 10) 00:11:14.901 2.359 - 2.370: 94.7901% ( 15) 00:11:14.901 2.370 - 2.382: 95.1118% ( 41) 00:11:14.901 2.382 - 2.394: 95.4335% ( 41) 00:11:14.901 2.394 - 2.406: 95.6846% ( 32) 00:11:14.901 2.406 - 2.418: 95.8729% ( 24) 00:11:14.901 2.418 - 2.430: 96.1083% ( 30) 00:11:14.901 2.430 - 2.441: 96.4064% ( 38) 00:11:14.901 2.441 - 2.453: 96.6261% ( 28) 00:11:14.901 2.453 - 2.465: 96.8929% ( 34) 00:11:14.901 2.465 - 2.477: 97.0655% ( 22) 00:11:14.901 2.477 - 2.489: 97.2146% ( 19) 00:11:14.901 2.489 - 2.501: 97.3166% ( 13) 00:11:14.901 2.501 - 2.513: 97.4343% ( 15) 00:11:14.901 2.513 - 2.524: 97.5363% ( 13) 00:11:14.901 2.524 - 2.536: 97.6854% ( 19) 00:11:14.901 2.536 - 2.548: 97.7560% ( 9) 00:11:14.901 2.548 - 2.560: 97.8109% ( 7) 00:11:14.901 2.560 - 2.572: 97.8894% ( 10) 00:11:14.901 2.572 - 2.584: 97.9129% ( 3) 00:11:14.901 2.584 - 2.596: 97.9286% ( 2) 00:11:14.901 2.596 - 2.607: 97.9521% ( 3) 00:11:14.901 2.607 - 2.619: 97.9600% ( 1) 00:11:14.901 2.619 - 2.631: 97.9757% ( 2) 00:11:14.901 2.655 - 2.667: 97.9914% ( 2) 00:11:14.901 2.667 - 2.679: 97.9992% ( 1) 00:11:14.901 2.679 - 2.690: 98.0228% ( 3) 00:11:14.901 2.690 - 2.702: 98.0384% ( 2) 00:11:14.901 2.702 - 2.714: 98.0620% ( 3) 00:11:14.901 2.714 - 2.726: 98.0698% ( 1) 00:11:14.901 2.738 - 2.750: 98.0855% ( 2) 00:11:14.901 2.750 - 2.761: 98.0934% ( 1) 00:11:14.901 2.761 - 2.773: 98.1012% ( 1) 00:11:14.901 2.797 - 2.809: 98.1091% ( 1) 00:11:14.901 2.821 - 2.833: 98.1169% ( 1) 00:11:14.901 2.844 - 2.856: 98.1248% ( 1) 00:11:14.901 2.856 - 2.868: 98.1404% ( 2) 00:11:14.901 2.880 - 2.892: 98.1561% ( 2) 00:11:14.901 2.916 - 2.927: 98.1640% ( 1) 00:11:14.901 2.927 - 2.939: 98.1718% ( 1) 00:11:14.901 3.022 - 3.034: 98.1797% ( 1) 00:11:14.901 3.034 - 3.058: 98.2189% ( 5) 00:11:14.901 3.176 - 3.200: 98.2268% ( 1) 00:11:14.901 3.295 - 3.319: 98.2346% ( 1) 00:11:14.901 3.342 - 3.366: 98.2424% ( 1) 00:11:14.901 3.390 - 3.413: 98.2503% ( 1) 00:11:14.901 3.461 - 3.484: 98.2738% ( 3) 00:11:14.901 3.484 - 3.508: 98.2974% ( 3) 00:11:14.901 3.508 - 3.532: 98.3209% ( 3) 00:11:14.901 3.532 - 3.556: 98.3288% ( 1) 00:11:14.901 3.556 - 3.579: 98.3366% ( 1) 00:11:14.901 3.603 - 3.627: 98.3444% ( 1) 00:11:14.901 3.650 - 3.674: 98.3523% ( 1) 00:11:14.901 3.674 - 3.698: 98.3680% ( 2) 00:11:14.901 3.698 - 3.721: 98.3837% ( 2) 00:11:14.901 3.745 - 3.769: 98.3915% ( 1) 00:11:14.901 3.793 - 3.816: 98.3994% ( 1) 00:11:14.901 3.864 - 3.887: 98.4072% ( 1) 00:11:14.901 3.887 - 3.911: 98.4229% ( 2) 00:11:14.901 3.911 - 3.935: 98.4386% ( 2) 00:11:14.901 4.053 - 4.077: 98.4464% ( 1) 00:11:14.901 4.148 - 4.172: 98.4543% ( 1) 00:11:14.901 4.290 - 4.314: 98.4621% ( 1) 00:11:14.901 5.404 - 5.428: 98.4700% ( 1) 00:11:14.901 5.926 - 5.950: 98.4778% ( 1) 
00:11:14.901 6.542 - 6.590: 98.4935% ( 2) 00:11:14.901 6.637 - 6.684: 98.5092% ( 2) 00:11:14.901 6.921 - 6.969: 98.5171% ( 1) 00:11:14.901 7.111 - 7.159: 98.5406% ( 3) 00:11:14.901 7.538 - 7.585: 98.5485% ( 1) 00:11:14.901 7.585 - 7.633: 98.5563% ( 1) 00:11:14.901 7.680 - 7.727: 98.5641% ( 1) 00:11:14.901 8.012 - 8.059: 98.5720% ( 1) 00:11:14.901 8.296 - 8.344: 98.5877% ( 2) 00:11:14.901 8.344 - 8.391: 98.5955% ( 1) 00:11:14.901 8.581 - 8.628: 98.6034% ( 1) 00:11:14.901 9.244 - 9.292: 98.6112% ( 1) 00:11:14.901 10.193 - 10.240: 98.6191% ( 1) 00:11:14.901 10.477 - 10.524: 98.6269% ( 1) 00:11:14.901 10.524 - 10.572: 98.6348% ( 1) 00:11:14.901 14.886 - 14.981: 98.6426% ( 1) 00:11:14.901 15.455 - 15.550: 98.6505%[2024-05-15 00:26:41.026823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:15.160 ( 1) 00:11:15.160 15.550 - 15.644: 98.6661% ( 2) 00:11:15.160 15.644 - 15.739: 98.6740% ( 1) 00:11:15.160 15.739 - 15.834: 98.6975% ( 3) 00:11:15.160 15.834 - 15.929: 98.7211% ( 3) 00:11:15.160 15.929 - 16.024: 98.7525% ( 4) 00:11:15.160 16.024 - 16.119: 98.8152% ( 8) 00:11:15.160 16.119 - 16.213: 98.8466% ( 4) 00:11:15.160 16.213 - 16.308: 98.9015% ( 7) 00:11:15.160 16.308 - 16.403: 98.9172% ( 2) 00:11:15.160 16.403 - 16.498: 98.9643% ( 6) 00:11:15.160 16.498 - 16.593: 99.0349% ( 9) 00:11:15.160 16.593 - 16.687: 99.0898% ( 7) 00:11:15.160 16.687 - 16.782: 99.1448% ( 7) 00:11:15.160 16.782 - 16.877: 99.1761% ( 4) 00:11:15.160 16.877 - 16.972: 99.2075% ( 4) 00:11:15.160 16.972 - 17.067: 99.2232% ( 2) 00:11:15.160 17.067 - 17.161: 99.2468% ( 3) 00:11:15.160 17.161 - 17.256: 99.2546% ( 1) 00:11:15.160 17.256 - 17.351: 99.2703% ( 2) 00:11:15.160 17.351 - 17.446: 99.2781% ( 1) 00:11:15.160 17.446 - 17.541: 99.2860% ( 1) 00:11:15.160 17.636 - 17.730: 99.3095% ( 3) 00:11:15.160 17.920 - 18.015: 99.3174% ( 1) 00:11:15.160 18.299 - 18.394: 99.3409% ( 3) 00:11:15.160 18.868 - 18.963: 99.3488% ( 1) 00:11:15.160 19.911 - 20.006: 99.3566% ( 1) 00:11:15.160 24.273 - 24.462: 99.3645% ( 1) 00:11:15.160 2123.852 - 2135.988: 99.3723% ( 1) 00:11:15.160 3301.073 - 3325.345: 99.3801% ( 1) 00:11:15.160 3980.705 - 4004.978: 99.8195% ( 56) 00:11:15.160 4004.978 - 4029.250: 99.9922% ( 22) 00:11:15.160 5000.154 - 5024.427: 100.0000% ( 1) 00:11:15.160 00:11:15.160 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:15.160 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:15.160 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:15.160 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:15.160 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:15.418 [ 00:11:15.418 { 00:11:15.418 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:15.418 "subtype": "Discovery", 00:11:15.418 "listen_addresses": [], 00:11:15.418 "allow_any_host": true, 00:11:15.418 "hosts": [] 00:11:15.418 }, 00:11:15.418 { 00:11:15.418 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:15.418 "subtype": "NVMe", 00:11:15.418 "listen_addresses": [ 00:11:15.418 { 00:11:15.418 "trtype": "VFIOUSER", 00:11:15.418 "adrfam": "IPv4", 00:11:15.418 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:15.418 "trsvcid": "0" 
00:11:15.418 } 00:11:15.418 ], 00:11:15.418 "allow_any_host": true, 00:11:15.418 "hosts": [], 00:11:15.418 "serial_number": "SPDK1", 00:11:15.418 "model_number": "SPDK bdev Controller", 00:11:15.418 "max_namespaces": 32, 00:11:15.418 "min_cntlid": 1, 00:11:15.418 "max_cntlid": 65519, 00:11:15.418 "namespaces": [ 00:11:15.418 { 00:11:15.418 "nsid": 1, 00:11:15.418 "bdev_name": "Malloc1", 00:11:15.418 "name": "Malloc1", 00:11:15.418 "nguid": "9240B32552B248F887C27FDE0BB9964C", 00:11:15.418 "uuid": "9240b325-52b2-48f8-87c2-7fde0bb9964c" 00:11:15.418 }, 00:11:15.418 { 00:11:15.418 "nsid": 2, 00:11:15.418 "bdev_name": "Malloc3", 00:11:15.418 "name": "Malloc3", 00:11:15.418 "nguid": "C1F9536121244CAC85E0A278E4A4D04C", 00:11:15.418 "uuid": "c1f95361-2124-4cac-85e0-a278e4a4d04c" 00:11:15.418 } 00:11:15.418 ] 00:11:15.418 }, 00:11:15.418 { 00:11:15.418 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:15.418 "subtype": "NVMe", 00:11:15.418 "listen_addresses": [ 00:11:15.418 { 00:11:15.418 "trtype": "VFIOUSER", 00:11:15.418 "adrfam": "IPv4", 00:11:15.418 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:15.418 "trsvcid": "0" 00:11:15.418 } 00:11:15.418 ], 00:11:15.418 "allow_any_host": true, 00:11:15.418 "hosts": [], 00:11:15.418 "serial_number": "SPDK2", 00:11:15.418 "model_number": "SPDK bdev Controller", 00:11:15.418 "max_namespaces": 32, 00:11:15.418 "min_cntlid": 1, 00:11:15.418 "max_cntlid": 65519, 00:11:15.418 "namespaces": [ 00:11:15.418 { 00:11:15.418 "nsid": 1, 00:11:15.418 "bdev_name": "Malloc2", 00:11:15.418 "name": "Malloc2", 00:11:15.418 "nguid": "B3CEC35C0BA04B91A424E887751F8FE6", 00:11:15.418 "uuid": "b3cec35c-0ba0-4b91-a424-e887751f8fe6" 00:11:15.418 } 00:11:15.418 ] 00:11:15.418 } 00:11:15.418 ] 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=825516 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:15.418 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:15.418 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.418 [2024-05-15 00:26:41.538448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:15.676 Malloc4 00:11:15.676 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:15.934 [2024-05-15 00:26:41.885074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:15.934 00:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:15.934 Asynchronous Event Request test 00:11:15.934 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:15.934 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:15.934 Registering asynchronous event callbacks... 00:11:15.934 Starting namespace attribute notice tests for all controllers... 00:11:15.934 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:15.934 aer_cb - Changed Namespace 00:11:15.934 Cleaning up... 00:11:16.192 [ 00:11:16.192 { 00:11:16.192 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:16.192 "subtype": "Discovery", 00:11:16.192 "listen_addresses": [], 00:11:16.192 "allow_any_host": true, 00:11:16.192 "hosts": [] 00:11:16.192 }, 00:11:16.192 { 00:11:16.192 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:16.192 "subtype": "NVMe", 00:11:16.192 "listen_addresses": [ 00:11:16.192 { 00:11:16.192 "trtype": "VFIOUSER", 00:11:16.192 "adrfam": "IPv4", 00:11:16.192 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:16.192 "trsvcid": "0" 00:11:16.192 } 00:11:16.192 ], 00:11:16.192 "allow_any_host": true, 00:11:16.192 "hosts": [], 00:11:16.192 "serial_number": "SPDK1", 00:11:16.192 "model_number": "SPDK bdev Controller", 00:11:16.192 "max_namespaces": 32, 00:11:16.192 "min_cntlid": 1, 00:11:16.192 "max_cntlid": 65519, 00:11:16.192 "namespaces": [ 00:11:16.192 { 00:11:16.192 "nsid": 1, 00:11:16.192 "bdev_name": "Malloc1", 00:11:16.192 "name": "Malloc1", 00:11:16.192 "nguid": "9240B32552B248F887C27FDE0BB9964C", 00:11:16.192 "uuid": "9240b325-52b2-48f8-87c2-7fde0bb9964c" 00:11:16.192 }, 00:11:16.192 { 00:11:16.192 "nsid": 2, 00:11:16.192 "bdev_name": "Malloc3", 00:11:16.192 "name": "Malloc3", 00:11:16.192 "nguid": "C1F9536121244CAC85E0A278E4A4D04C", 00:11:16.192 "uuid": "c1f95361-2124-4cac-85e0-a278e4a4d04c" 00:11:16.192 } 00:11:16.192 ] 00:11:16.192 }, 00:11:16.192 { 00:11:16.192 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:16.192 "subtype": "NVMe", 00:11:16.192 "listen_addresses": [ 00:11:16.192 { 00:11:16.192 "trtype": "VFIOUSER", 00:11:16.192 "adrfam": "IPv4", 00:11:16.192 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:16.192 "trsvcid": "0" 00:11:16.192 } 00:11:16.192 ], 00:11:16.192 "allow_any_host": true, 00:11:16.192 "hosts": [], 00:11:16.192 "serial_number": "SPDK2", 00:11:16.192 "model_number": "SPDK bdev Controller", 00:11:16.192 
"max_namespaces": 32, 00:11:16.192 "min_cntlid": 1, 00:11:16.192 "max_cntlid": 65519, 00:11:16.192 "namespaces": [ 00:11:16.192 { 00:11:16.192 "nsid": 1, 00:11:16.192 "bdev_name": "Malloc2", 00:11:16.192 "name": "Malloc2", 00:11:16.192 "nguid": "B3CEC35C0BA04B91A424E887751F8FE6", 00:11:16.192 "uuid": "b3cec35c-0ba0-4b91-a424-e887751f8fe6" 00:11:16.192 }, 00:11:16.192 { 00:11:16.192 "nsid": 2, 00:11:16.192 "bdev_name": "Malloc4", 00:11:16.192 "name": "Malloc4", 00:11:16.192 "nguid": "4F6BAEE414B84B55946DCF7B9CF4B46F", 00:11:16.192 "uuid": "4f6baee4-14b8-4b55-946d-cf7b9cf4b46f" 00:11:16.192 } 00:11:16.192 ] 00:11:16.192 } 00:11:16.192 ] 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 825516 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 819904 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 819904 ']' 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 819904 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 819904 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 819904' 00:11:16.192 killing process with pid 819904 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 819904 00:11:16.192 [2024-05-15 00:26:42.181088] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:16.192 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 819904 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=825659 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 825659' 00:11:16.449 Process pid: 825659 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 825659 00:11:16.449 00:26:42 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 825659 ']' 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.449 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:16.450 00:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:16.707 [2024-05-15 00:26:42.621131] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:16.707 [2024-05-15 00:26:42.622185] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:11:16.707 [2024-05-15 00:26:42.622274] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.707 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.707 [2024-05-15 00:26:42.695667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.707 [2024-05-15 00:26:42.812654] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.707 [2024-05-15 00:26:42.812714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.707 [2024-05-15 00:26:42.812740] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.707 [2024-05-15 00:26:42.812753] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.707 [2024-05-15 00:26:42.812773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.707 [2024-05-15 00:26:42.812861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.707 [2024-05-15 00:26:42.812941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.707 [2024-05-15 00:26:42.812997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.707 [2024-05-15 00:26:42.813001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.994 [2024-05-15 00:26:42.927977] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:16.994 [2024-05-15 00:26:42.928177] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:16.994 [2024-05-15 00:26:42.928424] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:16.994 [2024-05-15 00:26:42.929081] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:16.994 [2024-05-15 00:26:42.929329] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
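The trace that follows drives the target setup over JSON-RPC. Condensed into a minimal sketch (one controller shown; the script repeats the same steps for vfio-user1/cnode1 and vfio-user2/cnode2, and the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path is abbreviated here as rpc.py), the sequence it runs is roughly:

    # minimal sketch, assuming a running nvmf_tgt and spdk/scripts/rpc.py on PATH; not the exact trace
    rpc.py nvmf_create_transport -t VFIOUSER -M -I        # VFIOUSER transport, interrupt-mode flags
    mkdir -p /var/run/vfio-user/domain/vfio-user2/2       # socket directory backing the controller
    rpc.py bdev_malloc_create 64 512 -b Malloc2           # 64 MB malloc bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0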
00:11:17.560 00:26:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:17.560 00:26:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:11:17.560 00:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:18.490 00:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:18.747 00:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:18.747 00:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:18.748 00:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:18.748 00:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:18.748 00:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:19.006 Malloc1 00:11:19.006 00:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:19.264 00:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:19.521 00:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:19.778 [2024-05-15 00:26:45.801610] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:19.778 00:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:19.778 00:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:19.778 00:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:20.035 Malloc2 00:11:20.035 00:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:20.291 00:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:20.547 00:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 825659 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 825659 ']' 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 825659 00:11:20.804 
00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 825659 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 825659' 00:11:20.804 killing process with pid 825659 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 825659 00:11:20.804 [2024-05-15 00:26:46.845875] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:20.804 00:26:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 825659 00:11:21.061 00:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:21.061 00:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:21.061 00:11:21.061 real 0m53.859s 00:11:21.061 user 3m31.996s 00:11:21.061 sys 0m4.940s 00:11:21.061 00:26:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:21.061 00:26:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:21.061 ************************************ 00:11:21.061 END TEST nvmf_vfio_user 00:11:21.061 ************************************ 00:11:21.061 00:26:47 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:21.061 00:26:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:21.061 00:26:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:21.061 00:26:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.319 ************************************ 00:11:21.319 START TEST nvmf_vfio_user_nvme_compliance 00:11:21.319 ************************************ 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:21.319 * Looking for test storage... 
00:11:21.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.319 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=826276 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 826276' 00:11:21.320 Process pid: 826276 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 826276 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # '[' -z 826276 ']' 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:21.320 00:26:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:21.320 [2024-05-15 00:26:47.362202] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:11:21.320 [2024-05-15 00:26:47.362286] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.320 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.320 [2024-05-15 00:26:47.444152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.578 [2024-05-15 00:26:47.570723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.578 [2024-05-15 00:26:47.570779] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.578 [2024-05-15 00:26:47.570795] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.578 [2024-05-15 00:26:47.570808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.578 [2024-05-15 00:26:47.570820] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:21.578 [2024-05-15 00:26:47.570881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.578 [2024-05-15 00:26:47.570953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.578 [2024-05-15 00:26:47.570959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.509 00:26:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:22.509 00:26:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@861 -- # return 0 00:11:22.509 00:26:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:23.442 malloc0 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:23.442 [2024-05-15 00:26:49.421317] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.442 00:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:23.442 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.442 00:11:23.442 00:11:23.442 CUnit - A unit testing framework for C - Version 2.1-3 00:11:23.442 http://cunit.sourceforge.net/ 00:11:23.442 00:11:23.442 00:11:23.442 Suite: nvme_compliance 00:11:23.442 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 00:26:49.597857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:23.442 [2024-05-15 00:26:49.599368] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:23.442 [2024-05-15 00:26:49.599393] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:23.442 [2024-05-15 00:26:49.599405] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:23.442 [2024-05-15 00:26:49.602885] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:23.700 passed 00:11:23.700 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 00:26:49.688517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:23.700 [2024-05-15 00:26:49.691533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:23.700 passed 00:11:23.700 Test: admin_identify_ns ...[2024-05-15 00:26:49.782614] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:23.700 [2024-05-15 00:26:49.842951] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:23.700 [2024-05-15 00:26:49.850946] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:23.958 [2024-05-15 00:26:49.872076] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:23.958 passed 00:11:23.958 Test: admin_get_features_mandatory_features ...[2024-05-15 00:26:49.955490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:23.958 [2024-05-15 00:26:49.958508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:23.958 passed 00:11:23.958 Test: admin_get_features_optional_features ...[2024-05-15 00:26:50.045141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:23.958 [2024-05-15 00:26:50.048158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:23.958 passed 00:11:24.215 Test: admin_set_features_number_of_queues ...[2024-05-15 00:26:50.130486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.215 [2024-05-15 00:26:50.239099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:24.215 passed 00:11:24.215 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 00:26:50.323200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.215 [2024-05-15 00:26:50.326223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:24.215 passed 
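For reference, the rpc_cmd calls traced above (before the nvme_compliance binary starts) amount to the following standalone setup of the vfio-user compliance target. This is only a minimal sketch, assuming an nvmf_tgt process is already running and that scripts/rpc.py from the SPDK tree talks to its default RPC socket; the NQN, malloc bdev geometry, and socket directory are copied from the trace itself:

# stand up the vfio-user transport and a 64 MiB / 512 B-block malloc namespace
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
# subsystem with serial "spdk", up to 32 namespaces, any host allowed
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# point the compliance tool at the vfio-user endpoint, as the test script does
./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'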
00:11:24.472 Test: admin_get_log_page_with_lpo ...[2024-05-15 00:26:50.410825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.472 [2024-05-15 00:26:50.475960] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:24.472 [2024-05-15 00:26:50.489043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:24.472 passed 00:11:24.472 Test: fabric_property_get ...[2024-05-15 00:26:50.576841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.472 [2024-05-15 00:26:50.578154] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:24.472 [2024-05-15 00:26:50.579863] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:24.472 passed 00:11:24.730 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 00:26:50.666492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.730 [2024-05-15 00:26:50.667771] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:24.730 [2024-05-15 00:26:50.669517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:24.730 passed 00:11:24.730 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 00:26:50.753589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.730 [2024-05-15 00:26:50.836942] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:24.730 [2024-05-15 00:26:50.852941] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:24.730 [2024-05-15 00:26:50.858049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:24.988 passed 00:11:24.988 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 00:26:50.947498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.988 [2024-05-15 00:26:50.948761] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:24.988 [2024-05-15 00:26:50.950513] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:24.988 passed 00:11:24.988 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 00:26:51.033529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:24.988 [2024-05-15 00:26:51.108946] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:24.988 [2024-05-15 00:26:51.132942] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:24.988 [2024-05-15 00:26:51.138046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:25.245 passed 00:11:25.245 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 00:26:51.226101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:25.245 [2024-05-15 00:26:51.227372] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:25.245 [2024-05-15 00:26:51.227409] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:25.245 [2024-05-15 00:26:51.229121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:25.245 passed 00:11:25.245 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
00:26:51.314338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:25.245 [2024-05-15 00:26:51.405941] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:25.502 [2024-05-15 00:26:51.413938] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:25.502 [2024-05-15 00:26:51.421944] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:25.502 [2024-05-15 00:26:51.429942] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:25.502 [2024-05-15 00:26:51.459060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:25.502 passed 00:11:25.502 Test: admin_create_io_sq_verify_pc ...[2024-05-15 00:26:51.545251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:25.502 [2024-05-15 00:26:51.560968] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:25.502 [2024-05-15 00:26:51.578520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:25.502 passed 00:11:25.502 Test: admin_create_io_qp_max_qps ...[2024-05-15 00:26:51.664112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:26.874 [2024-05-15 00:26:52.775949] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:27.131 [2024-05-15 00:26:53.157894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:27.131 passed 00:11:27.131 Test: admin_create_io_sq_shared_cq ...[2024-05-15 00:26:53.244323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:27.388 [2024-05-15 00:26:53.375965] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:27.388 [2024-05-15 00:26:53.413028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:27.388 passed 00:11:27.388 00:11:27.388 Run Summary: Type Total Ran Passed Failed Inactive 00:11:27.388 suites 1 1 n/a 0 0 00:11:27.388 tests 18 18 18 0 0 00:11:27.388 asserts 360 360 360 0 n/a 00:11:27.388 00:11:27.388 Elapsed time = 1.584 seconds 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 826276 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' -z 826276 ']' 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # kill -0 826276 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # uname 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 826276 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # echo 'killing process with pid 826276' 00:11:27.388 killing process with pid 826276 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@966 -- # kill 826276 00:11:27.388 [2024-05-15 00:26:53.498426] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:27.388 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # wait 826276 00:11:27.646 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:27.646 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:27.646 00:11:27.646 real 0m6.559s 00:11:27.646 user 0m18.583s 00:11:27.646 sys 0m0.649s 00:11:27.646 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:27.646 00:26:53 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:27.646 ************************************ 00:11:27.646 END TEST nvmf_vfio_user_nvme_compliance 00:11:27.646 ************************************ 00:11:27.904 00:26:53 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:27.904 00:26:53 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:27.904 00:26:53 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:27.904 00:26:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 ************************************ 00:11:27.904 START TEST nvmf_vfio_user_fuzz 00:11:27.904 ************************************ 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:27.904 * Looking for test storage... 
00:11:27.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:27.904 00:26:53 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=827118 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 827118' 00:11:27.904 Process pid: 827118 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 827118 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # '[' -z 827118 ']' 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:27.904 00:26:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:28.162 00:26:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:28.162 00:26:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@861 -- # return 0 00:11:28.162 00:26:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:29.534 malloc0 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:29.534 00:26:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:01.588 Fuzzing completed. Shutting down the fuzz application 00:12:01.588 00:12:01.588 Dumping successful admin opcodes: 00:12:01.588 8, 9, 10, 24, 00:12:01.588 Dumping successful io opcodes: 00:12:01.588 0, 00:12:01.588 NS: 0x200003a1ef00 I/O qp, Total commands completed: 634510, total successful commands: 2458, random_seed: 1386027456 00:12:01.588 NS: 0x200003a1ef00 admin qp, Total commands completed: 134686, total successful commands: 1086, random_seed: 1390325120 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 827118 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' -z 827118 ']' 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # kill -0 827118 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # uname 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 827118 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 827118' 00:12:01.588 killing process with pid 827118 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # kill 827118 00:12:01.588 00:27:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # wait 827118 00:12:01.588 00:27:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:01.588 
00:27:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:01.588 00:12:01.588 real 0m32.435s 00:12:01.588 user 0m33.321s 00:12:01.588 sys 0m26.733s 00:12:01.588 00:27:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:01.588 00:27:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:01.588 ************************************ 00:12:01.588 END TEST nvmf_vfio_user_fuzz 00:12:01.588 ************************************ 00:12:01.588 00:27:26 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:01.588 00:27:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:01.588 00:27:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:01.588 00:27:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:01.588 ************************************ 00:12:01.588 START TEST nvmf_host_management 00:12:01.588 ************************************ 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:01.588 * Looking for test storage... 00:12:01.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.588 00:27:26 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.589 00:27:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.965 00:27:28 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:02.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:02.965 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:02.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:02.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:02.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:12:02.965 00:12:02.965 --- 10.0.0.2 ping statistics --- 00:12:02.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.965 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:12:02.965 00:12:02.965 --- 10.0.0.1 ping statistics --- 00:12:02.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.965 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:02.965 00:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=833482 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 833482 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 833482 ']' 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:02.966 00:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:02.966 [2024-05-15 00:27:28.983319] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
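The nvmf_tcp_init trace above splits the two E810 ports between network namespaces so that target and initiator can run on one host: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (NVMF_FIRST_TARGET_IP), while cvl_0_1 stays in the root namespace as 10.0.0.1 (NVMF_INITIATOR_IP), with TCP port 4420 opened and connectivity verified by ping. Condensed into one plain sequence of the same commands (the cvl_* interface names are specific to this test node):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator side -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target namespace -> initiator side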
00:12:02.966 [2024-05-15 00:27:28.983411] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.966 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.966 [2024-05-15 00:27:29.060055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.224 [2024-05-15 00:27:29.172363] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.224 [2024-05-15 00:27:29.172417] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.224 [2024-05-15 00:27:29.172449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.224 [2024-05-15 00:27:29.172460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.224 [2024-05-15 00:27:29.172471] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.224 [2024-05-15 00:27:29.172606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.224 [2024-05-15 00:27:29.172861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.224 [2024-05-15 00:27:29.172921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:03.224 [2024-05-15 00:27:29.172923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.224 [2024-05-15 00:27:29.321566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:03.224 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.224 00:27:29 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.224 Malloc0 00:12:03.224 [2024-05-15 00:27:29.380391] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:03.224 [2024-05-15 00:27:29.380688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=833574 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 833574 /var/tmp/bdevperf.sock 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 833574 ']' 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:03.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:03.482 { 00:12:03.482 "params": { 00:12:03.482 "name": "Nvme$subsystem", 00:12:03.482 "trtype": "$TEST_TRANSPORT", 00:12:03.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.482 "adrfam": "ipv4", 00:12:03.482 "trsvcid": "$NVMF_PORT", 00:12:03.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.482 "hdgst": ${hdgst:-false}, 00:12:03.482 "ddgst": ${ddgst:-false} 00:12:03.482 }, 00:12:03.482 "method": "bdev_nvme_attach_controller" 00:12:03.482 } 00:12:03.482 EOF 00:12:03.482 )") 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
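Above, gen_nvmf_target_json assembles the bdevperf configuration that is fed in through --json /dev/fd/63; the resolved bdev_nvme_attach_controller parameters are printed just below. A rough standalone equivalent would write the same attach into a config file and pass it to bdevperf directly. This is a sketch only: the file path is hypothetical and the outer "subsystems"/"bdev"/"config" wrapper is assumed to follow the standard SPDK JSON config layout, which this log does not show verbatim:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same queue depth, I/O size, workload and runtime as the traced run
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10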
00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:03.482 00:27:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:03.482 "params": { 00:12:03.482 "name": "Nvme0", 00:12:03.482 "trtype": "tcp", 00:12:03.482 "traddr": "10.0.0.2", 00:12:03.482 "adrfam": "ipv4", 00:12:03.482 "trsvcid": "4420", 00:12:03.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:03.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:03.482 "hdgst": false, 00:12:03.482 "ddgst": false 00:12:03.482 }, 00:12:03.482 "method": "bdev_nvme_attach_controller" 00:12:03.482 }' 00:12:03.482 [2024-05-15 00:27:29.453323] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:12:03.482 [2024-05-15 00:27:29.453400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833574 ] 00:12:03.482 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.482 [2024-05-15 00:27:29.528947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.482 [2024-05-15 00:27:29.638781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.047 Running I/O for 10 seconds... 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.047 00:27:30 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=10 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 10 -ge 100 ']' 00:12:04.047 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.307 [2024-05-15 00:27:30.375961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 
is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.376162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2190ab0 is same with the state(5) to be set 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.307 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.307 [2024-05-15 00:27:30.380724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.307 [2024-05-15 00:27:30.380767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.380785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.307 [2024-05-15 00:27:30.380799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.380813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.307 [2024-05-15 00:27:30.380826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.380840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.307 [2024-05-15 00:27:30.380853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.380866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b5990 is same with the state(5) to be set 00:12:04.307 [2024-05-15 00:27:30.381440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381567] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.381971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.381987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.382001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.382015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.382029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.382044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.382058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.382073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.382087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.307 [2024-05-15 00:27:30.382102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.307 [2024-05-15 00:27:30.382116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.382975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.382995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.308 [2024-05-15 00:27:30.383322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.308 [2024-05-15 00:27:30.383336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.309 [2024-05-15 00:27:30.383351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:04.309 [2024-05-15 00:27:30.383367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.309 [2024-05-15 00:27:30.383385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:12:04.309 [2024-05-15 00:27:30.383400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.309 [2024-05-15 00:27:30.383492] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24e6f20 was disconnected and freed. reset controller. 00:12:04.309 [2024-05-15 00:27:30.384595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:04.309 task offset: 57344 on job bdev=Nvme0n1 fails 00:12:04.309 00:12:04.309 Latency(us) 00:12:04.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.309 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:04.309 Job: Nvme0n1 ended in about 0.40 seconds with error 00:12:04.309 Verification LBA range: start 0x0 length 0x400 00:12:04.309 Nvme0n1 : 0.40 1123.76 70.24 160.54 0.00 48489.84 2463.67 46215.02 00:12:04.309 =================================================================================================================== 00:12:04.309 Total : 1123.76 70.24 160.54 0.00 48489.84 2463.67 46215.02 00:12:04.309 [2024-05-15 00:27:30.386501] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:04.309 [2024-05-15 00:27:30.386545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b5990 (9): Bad file descriptor 00:12:04.309 00:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.309 00:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:04.309 [2024-05-15 00:27:30.438942] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:05.240 00:27:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 833574 00:12:05.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (833574) - No such process 00:12:05.240 00:27:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:05.240 00:27:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:05.240 00:27:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:05.240 00:27:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:05.240 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:05.241 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:05.241 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:05.241 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:05.241 { 00:12:05.241 "params": { 00:12:05.241 "name": "Nvme$subsystem", 00:12:05.241 "trtype": "$TEST_TRANSPORT", 00:12:05.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.241 "adrfam": "ipv4", 00:12:05.241 "trsvcid": "$NVMF_PORT", 00:12:05.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.241 "hdgst": ${hdgst:-false}, 00:12:05.241 "ddgst": ${ddgst:-false} 00:12:05.241 }, 00:12:05.241 "method": "bdev_nvme_attach_controller" 
00:12:05.241 } 00:12:05.241 EOF 00:12:05.241 )") 00:12:05.241 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:05.241 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:05.241 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:05.241 00:27:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:05.241 "params": { 00:12:05.241 "name": "Nvme0", 00:12:05.241 "trtype": "tcp", 00:12:05.241 "traddr": "10.0.0.2", 00:12:05.241 "adrfam": "ipv4", 00:12:05.241 "trsvcid": "4420", 00:12:05.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:05.241 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:05.241 "hdgst": false, 00:12:05.241 "ddgst": false 00:12:05.241 }, 00:12:05.241 "method": "bdev_nvme_attach_controller" 00:12:05.241 }' 00:12:05.499 [2024-05-15 00:27:31.438924] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:12:05.499 [2024-05-15 00:27:31.439026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833804 ] 00:12:05.499 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.499 [2024-05-15 00:27:31.512414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.499 [2024-05-15 00:27:31.625037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.065 Running I/O for 1 seconds... 00:12:06.998 00:12:06.998 Latency(us) 00:12:06.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.998 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:06.998 Verification LBA range: start 0x0 length 0x400 00:12:06.998 Nvme0n1 : 1.06 1024.32 64.02 0.00 0.00 61666.17 15243.19 46603.38 00:12:06.998 =================================================================================================================== 00:12:06.998 Total : 1024.32 64.02 0.00 0.00 61666.17 15243.19 46603.38 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.286 rmmod nvme_tcp 00:12:07.286 rmmod nvme_fabrics 00:12:07.286 rmmod nvme_keyring 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.286 00:27:33 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 833482 ']' 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 833482 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 833482 ']' 00:12:07.286 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 833482 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 833482 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 833482' 00:12:07.287 killing process with pid 833482 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 833482 00:12:07.287 [2024-05-15 00:27:33.395278] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:07.287 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 833482 00:12:07.546 [2024-05-15 00:27:33.659671] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.546 00:27:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.079 00:27:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.079 00:27:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:10.079 00:12:10.079 real 0m9.369s 00:12:10.079 user 0m21.329s 00:12:10.079 sys 0m2.957s 00:12:10.079 00:27:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:10.079 00:27:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.079 ************************************ 00:12:10.079 END TEST nvmf_host_management 00:12:10.079 ************************************ 00:12:10.080 00:27:35 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
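Stripped of the rpc_cmd plumbing, the host-management exercise that just ran comes down to two RPCs issued against the target while bdevperf keeps its verify workload in flight; a rough sketch, assuming a local rpc.py path and the NQNs used in this run:

RPC=./scripts/rpc.py
# Revoking the host while I/O is outstanding tears down its qpairs; the host side
# sees the ABORTED - SQ DELETION completions dumped above and resets the controller.
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admitting the host lets the reset/reconnect succeed ("Resetting controller successful" above).
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0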
00:12:10.080 00:27:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:10.080 00:27:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:10.080 00:27:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.080 ************************************ 00:12:10.080 START TEST nvmf_lvol 00:12:10.080 ************************************ 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:10.080 * Looking for test storage... 00:12:10.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 
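prepare_net_devs and nvmf_tcp_init, traced below, split the two detected e810 ports between the default namespace (initiator side, 10.0.0.1 on cvl_0_1) and a private namespace (target side, 10.0.0.2 on cvl_0_0). Reduced to its essentials, the wiring is roughly the following; interface names and addresses are the ones this run detects:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why the later RPCs and NVMe/TCP connections reach it at 10.0.0.2.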
00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.080 00:27:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol 
-- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:12.615 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:12.615 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:12.615 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol 
-- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:12.615 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:12.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:12:12.615 00:12:12.615 --- 10.0.0.2 ping statistics --- 00:12:12.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.615 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:12:12.615 00:12:12.615 --- 10.0.0.1 ping statistics --- 00:12:12.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.615 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=836321 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 836321 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 836321 ']' 00:12:12.615 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.616 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:12.616 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.616 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:12.616 00:27:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:12.616 [2024-05-15 00:27:38.558372] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:12:12.616 [2024-05-15 00:27:38.558457] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.616 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.616 [2024-05-15 00:27:38.636062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.616 [2024-05-15 00:27:38.745314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.616 [2024-05-15 00:27:38.745373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:12.616 [2024-05-15 00:27:38.745402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.616 [2024-05-15 00:27:38.745413] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.616 [2024-05-15 00:27:38.745422] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.616 [2024-05-15 00:27:38.745486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.616 [2024-05-15 00:27:38.745557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.616 [2024-05-15 00:27:38.745560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.549 00:27:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:13.549 00:27:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:12:13.549 00:27:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.549 00:27:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:13.549 00:27:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:13.549 00:27:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.549 00:27:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:13.807 [2024-05-15 00:27:39.764584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.807 00:27:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.064 00:27:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:14.064 00:27:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.322 00:27:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:14.322 00:27:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:14.580 00:27:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:14.838 00:27:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d6e3dc3e-ac93-49aa-b024-5978cdc07a30 00:12:14.838 00:27:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d6e3dc3e-ac93-49aa-b024-5978cdc07a30 lvol 20 00:12:15.125 00:27:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a6d71fb0-1a33-4290-924f-7557b95ea92f 00:12:15.125 00:27:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:15.382 00:27:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6d71fb0-1a33-4290-924f-7557b95ea92f 00:12:15.640 00:27:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
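Condensed, the RPC sequence just traced amounts to the following sketch (rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the trace, and <lvs-uuid>/<lvol-uuid> stand for the run-specific IDs d6e3dc3e-... and a6d71fb0-... echoed above):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                     # create the TCP transport
    rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
    rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # raid0 across both malloc bdevs
    rpc.py bdev_lvol_create_lvstore raid0 lvs                          # lvstore on raid0 -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # 20 MB lvol -> <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420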
00:12:15.640 [2024-05-15 00:27:41.801087] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:15.640 [2024-05-15 00:27:41.801439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.898 00:27:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:16.156 00:27:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=836858 00:12:16.156 00:27:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:16.156 00:27:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:16.156 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.090 00:27:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a6d71fb0-1a33-4290-924f-7557b95ea92f MY_SNAPSHOT 00:12:17.347 00:27:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=73a6403b-317c-4abe-afe2-d595cbbf98ad 00:12:17.347 00:27:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a6d71fb0-1a33-4290-924f-7557b95ea92f 30 00:12:17.605 00:27:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 73a6403b-317c-4abe-afe2-d595cbbf98ad MY_CLONE 00:12:17.863 00:27:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a87edd29-0bf9-4de3-98b6-bab46220e167 00:12:17.863 00:27:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a87edd29-0bf9-4de3-98b6-bab46220e167 00:12:18.428 00:27:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 836858 00:12:26.532 Initializing NVMe Controllers 00:12:26.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:26.532 Controller IO queue size 128, less than required. 00:12:26.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:26.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:26.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:26.532 Initialization complete. Launching workers. 
00:12:26.532 ======================================================== 00:12:26.532 Latency(us) 00:12:26.532 Device Information : IOPS MiB/s Average min max 00:12:26.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10891.00 42.54 11758.05 2294.97 80122.94 00:12:26.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10872.50 42.47 11776.53 2009.63 58827.57 00:12:26.532 ======================================================== 00:12:26.532 Total : 21763.50 85.01 11767.28 2009.63 80122.94 00:12:26.532 00:12:26.532 00:27:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:26.789 00:27:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6d71fb0-1a33-4290-924f-7557b95ea92f 00:12:27.047 00:27:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6e3dc3e-ac93-49aa-b024-5978cdc07a30 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.305 rmmod nvme_tcp 00:12:27.305 rmmod nvme_fabrics 00:12:27.305 rmmod nvme_keyring 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 836321 ']' 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 836321 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 836321 ']' 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 836321 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 836321 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 836321' 00:12:27.305 killing process with pid 836321 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 836321 00:12:27.305 [2024-05-15 00:27:53.428128] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:12:27.305 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 836321 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.871 00:27:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.777 00:12:29.777 real 0m20.022s 00:12:29.777 user 1m6.853s 00:12:29.777 sys 0m5.912s 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:29.777 ************************************ 00:12:29.777 END TEST nvmf_lvol 00:12:29.777 ************************************ 00:12:29.777 00:27:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:29.777 00:27:55 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:29.777 00:27:55 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:29.777 00:27:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.777 ************************************ 00:12:29.777 START TEST nvmf_lvs_grow 00:12:29.777 ************************************ 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:29.777 * Looking for test storage... 
00:12:29.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.777 00:27:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.778 00:27:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:32.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:32.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:32.310 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:32.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:32.310 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:32.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:12:32.568 00:12:32.568 --- 10.0.0.2 ping statistics --- 00:12:32.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.568 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:32.568 00:12:32.568 --- 10.0.0.1 ping statistics --- 00:12:32.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.568 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:32.568 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=840410 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 840410 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 840410 ']' 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:32.569 00:27:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:32.569 [2024-05-15 00:27:58.589738] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:12:32.569 [2024-05-15 00:27:58.589824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.569 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.569 [2024-05-15 00:27:58.670300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.827 [2024-05-15 00:27:58.785145] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.827 [2024-05-15 00:27:58.785206] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:32.827 [2024-05-15 00:27:58.785222] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.827 [2024-05-15 00:27:58.785244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.827 [2024-05-15 00:27:58.785256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.827 [2024-05-15 00:27:58.785293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.393 00:27:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:33.393 00:27:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:12:33.393 00:27:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.393 00:27:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:33.393 00:27:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:33.393 00:27:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.393 00:27:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:33.651 [2024-05-15 00:27:59.771005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.651 00:27:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:33.651 00:27:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:33.651 00:27:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:33.651 00:27:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:33.908 ************************************ 00:12:33.908 START TEST lvs_grow_clean 00:12:33.908 ************************************ 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:33.908 00:27:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:34.166 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:34.166 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:34.424 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:34.424 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:34.424 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:34.682 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:34.682 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:34.682 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d lvol 150 00:12:34.941 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7688a58b-1ca7-4849-9d4b-d5d0687d8a8f 00:12:34.941 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:34.941 00:28:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:34.941 [2024-05-15 00:28:01.100172] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:34.941 [2024-05-15 00:28:01.100265] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:34.941 true 00:12:35.199 00:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:35.199 00:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:35.199 00:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:35.199 00:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:35.457 00:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7688a58b-1ca7-4849-9d4b-d5d0687d8a8f 00:12:35.715 00:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:35.972 [2024-05-15 00:28:02.074872] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:35.972 [2024-05-15 
00:28:02.075231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.972 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=840973 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 840973 /var/tmp/bdevperf.sock 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 840973 ']' 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:36.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:36.269 00:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:36.269 [2024-05-15 00:28:02.403862] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
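While bdevperf drives random writes (-w randwrite) at the exported lvol over NVMe/TCP, the test grows the store underneath it. Reduced to its essentials (rpc.py again stands for the full scripts/rpc.py path, <lvs-uuid> for the df2563b2-... store created above), the grow path exercised below is roughly:

    truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev   # enlarge the 200M AIO backing file
    rpc.py bdev_aio_rescan aio_bdev                        # AIO bdev picks up the new block count (51200 -> 102400)
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>            # lvstore claims the added clusters
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # expect 49 -> 99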
00:12:36.269 [2024-05-15 00:28:02.403966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840973 ] 00:12:36.527 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.527 [2024-05-15 00:28:02.480116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.527 [2024-05-15 00:28:02.596620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.459 00:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:37.459 00:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:12:37.459 00:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:37.716 Nvme0n1 00:12:37.716 00:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:37.974 [ 00:12:37.974 { 00:12:37.974 "name": "Nvme0n1", 00:12:37.974 "aliases": [ 00:12:37.974 "7688a58b-1ca7-4849-9d4b-d5d0687d8a8f" 00:12:37.974 ], 00:12:37.974 "product_name": "NVMe disk", 00:12:37.974 "block_size": 4096, 00:12:37.974 "num_blocks": 38912, 00:12:37.974 "uuid": "7688a58b-1ca7-4849-9d4b-d5d0687d8a8f", 00:12:37.974 "assigned_rate_limits": { 00:12:37.974 "rw_ios_per_sec": 0, 00:12:37.974 "rw_mbytes_per_sec": 0, 00:12:37.974 "r_mbytes_per_sec": 0, 00:12:37.974 "w_mbytes_per_sec": 0 00:12:37.974 }, 00:12:37.974 "claimed": false, 00:12:37.974 "zoned": false, 00:12:37.974 "supported_io_types": { 00:12:37.974 "read": true, 00:12:37.974 "write": true, 00:12:37.974 "unmap": true, 00:12:37.974 "write_zeroes": true, 00:12:37.974 "flush": true, 00:12:37.974 "reset": true, 00:12:37.974 "compare": true, 00:12:37.974 "compare_and_write": true, 00:12:37.974 "abort": true, 00:12:37.974 "nvme_admin": true, 00:12:37.974 "nvme_io": true 00:12:37.974 }, 00:12:37.974 "memory_domains": [ 00:12:37.974 { 00:12:37.974 "dma_device_id": "system", 00:12:37.974 "dma_device_type": 1 00:12:37.974 } 00:12:37.974 ], 00:12:37.974 "driver_specific": { 00:12:37.974 "nvme": [ 00:12:37.974 { 00:12:37.974 "trid": { 00:12:37.974 "trtype": "TCP", 00:12:37.974 "adrfam": "IPv4", 00:12:37.974 "traddr": "10.0.0.2", 00:12:37.974 "trsvcid": "4420", 00:12:37.974 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:37.974 }, 00:12:37.974 "ctrlr_data": { 00:12:37.974 "cntlid": 1, 00:12:37.974 "vendor_id": "0x8086", 00:12:37.974 "model_number": "SPDK bdev Controller", 00:12:37.974 "serial_number": "SPDK0", 00:12:37.974 "firmware_revision": "24.05", 00:12:37.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:37.974 "oacs": { 00:12:37.974 "security": 0, 00:12:37.974 "format": 0, 00:12:37.974 "firmware": 0, 00:12:37.974 "ns_manage": 0 00:12:37.974 }, 00:12:37.974 "multi_ctrlr": true, 00:12:37.974 "ana_reporting": false 00:12:37.974 }, 00:12:37.974 "vs": { 00:12:37.974 "nvme_version": "1.3" 00:12:37.974 }, 00:12:37.974 "ns_data": { 00:12:37.974 "id": 1, 00:12:37.974 "can_share": true 00:12:37.974 } 00:12:37.974 } 00:12:37.974 ], 00:12:37.974 "mp_policy": "active_passive" 00:12:37.974 } 00:12:37.974 } 00:12:37.974 ] 00:12:37.974 00:28:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=841125 00:12:37.975 00:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:37.975 00:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:37.975 Running I/O for 10 seconds... 00:12:39.349 Latency(us) 00:12:39.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.349 Nvme0n1 : 1.00 14210.00 55.51 0.00 0.00 0.00 0.00 0.00 00:12:39.349 =================================================================================================================== 00:12:39.349 Total : 14210.00 55.51 0.00 0.00 0.00 0.00 0.00 00:12:39.349 00:12:39.915 00:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:40.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.173 Nvme0n1 : 2.00 14337.00 56.00 0.00 0.00 0.00 0.00 0.00 00:12:40.173 =================================================================================================================== 00:12:40.173 Total : 14337.00 56.00 0.00 0.00 0.00 0.00 0.00 00:12:40.173 00:12:40.173 true 00:12:40.173 00:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:40.173 00:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:40.431 00:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:40.431 00:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:40.431 00:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 841125 00:12:40.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.997 Nvme0n1 : 3.00 14486.33 56.59 0.00 0.00 0.00 0.00 0.00 00:12:40.997 =================================================================================================================== 00:12:40.997 Total : 14486.33 56.59 0.00 0.00 0.00 0.00 0.00 00:12:40.997 00:12:42.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.371 Nvme0n1 : 4.00 14496.50 56.63 0.00 0.00 0.00 0.00 0.00 00:12:42.371 =================================================================================================================== 00:12:42.371 Total : 14496.50 56.63 0.00 0.00 0.00 0.00 0.00 00:12:42.371 00:12:43.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.305 Nvme0n1 : 5.00 14515.60 56.70 0.00 0.00 0.00 0.00 0.00 00:12:43.305 =================================================================================================================== 00:12:43.305 Total : 14515.60 56.70 0.00 0.00 0.00 0.00 0.00 00:12:43.305 00:12:44.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.239 Nvme0n1 : 6.00 14603.17 57.04 0.00 0.00 0.00 0.00 0.00 00:12:44.239 
=================================================================================================================== 00:12:44.239 Total : 14603.17 57.04 0.00 0.00 0.00 0.00 0.00 00:12:44.239 00:12:45.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.172 Nvme0n1 : 7.00 14656.43 57.25 0.00 0.00 0.00 0.00 0.00 00:12:45.172 =================================================================================================================== 00:12:45.172 Total : 14656.43 57.25 0.00 0.00 0.00 0.00 0.00 00:12:45.172 00:12:46.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.107 Nvme0n1 : 8.00 14680.38 57.35 0.00 0.00 0.00 0.00 0.00 00:12:46.107 =================================================================================================================== 00:12:46.107 Total : 14680.38 57.35 0.00 0.00 0.00 0.00 0.00 00:12:46.107 00:12:47.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.041 Nvme0n1 : 9.00 14734.33 57.56 0.00 0.00 0.00 0.00 0.00 00:12:47.041 =================================================================================================================== 00:12:47.041 Total : 14734.33 57.56 0.00 0.00 0.00 0.00 0.00 00:12:47.041 00:12:47.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.975 Nvme0n1 : 10.00 14770.10 57.70 0.00 0.00 0.00 0.00 0.00 00:12:47.975 =================================================================================================================== 00:12:47.975 Total : 14770.10 57.70 0.00 0.00 0.00 0.00 0.00 00:12:47.975 00:12:47.975 00:12:47.975 Latency(us) 00:12:47.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.975 Nvme0n1 : 10.01 14770.37 57.70 0.00 0.00 8659.74 4781.70 14854.83 00:12:47.975 =================================================================================================================== 00:12:47.975 Total : 14770.37 57.70 0.00 0.00 8659.74 4781.70 14854.83 00:12:47.975 0 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 840973 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 840973 ']' 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 840973 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 840973 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 840973' 00:12:48.233 killing process with pid 840973 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 840973 00:12:48.233 Received shutdown signal, test time was about 10.000000 seconds 00:12:48.233 00:12:48.233 Latency(us) 00:12:48.233 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:12:48.233 =================================================================================================================== 00:12:48.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.233 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 840973 00:12:48.491 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:48.749 00:28:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:49.007 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:49.007 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:49.265 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:49.265 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:49.265 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:49.523 [2024-05-15 00:28:15.568822] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:49.523 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:49.781 request: 00:12:49.781 { 00:12:49.781 "uuid": "df2563b2-46f7-42fe-88c9-0a7c8f9a068d", 00:12:49.781 "method": "bdev_lvol_get_lvstores", 00:12:49.781 "req_id": 1 00:12:49.781 } 00:12:49.781 Got JSON-RPC error response 00:12:49.781 response: 00:12:49.781 { 00:12:49.781 "code": -19, 00:12:49.781 "message": "No such device" 00:12:49.781 } 00:12:49.781 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:12:49.781 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:49.781 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:49.781 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:49.781 00:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:50.039 aio_bdev 00:12:50.039 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7688a58b-1ca7-4849-9d4b-d5d0687d8a8f 00:12:50.039 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=7688a58b-1ca7-4849-9d4b-d5d0687d8a8f 00:12:50.039 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:12:50.039 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:12:50.039 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:12:50.039 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:12:50.039 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:50.297 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7688a58b-1ca7-4849-9d4b-d5d0687d8a8f -t 2000 00:12:50.555 [ 00:12:50.555 { 00:12:50.555 "name": "7688a58b-1ca7-4849-9d4b-d5d0687d8a8f", 00:12:50.555 "aliases": [ 00:12:50.555 "lvs/lvol" 00:12:50.555 ], 00:12:50.555 "product_name": "Logical Volume", 00:12:50.555 "block_size": 4096, 00:12:50.555 "num_blocks": 38912, 00:12:50.555 "uuid": "7688a58b-1ca7-4849-9d4b-d5d0687d8a8f", 00:12:50.555 "assigned_rate_limits": { 00:12:50.555 "rw_ios_per_sec": 0, 00:12:50.555 "rw_mbytes_per_sec": 0, 00:12:50.555 "r_mbytes_per_sec": 0, 00:12:50.555 "w_mbytes_per_sec": 0 00:12:50.555 }, 00:12:50.555 "claimed": false, 00:12:50.555 "zoned": false, 00:12:50.555 "supported_io_types": { 00:12:50.555 "read": true, 00:12:50.555 "write": true, 00:12:50.555 "unmap": true, 00:12:50.555 "write_zeroes": true, 00:12:50.555 "flush": false, 00:12:50.555 "reset": true, 00:12:50.555 "compare": false, 00:12:50.555 "compare_and_write": false, 00:12:50.555 "abort": false, 00:12:50.555 "nvme_admin": false, 00:12:50.555 "nvme_io": false 00:12:50.555 }, 00:12:50.555 "driver_specific": { 00:12:50.555 "lvol": { 00:12:50.555 "lvol_store_uuid": "df2563b2-46f7-42fe-88c9-0a7c8f9a068d", 00:12:50.555 "base_bdev": "aio_bdev", 
00:12:50.555 "thin_provision": false, 00:12:50.555 "num_allocated_clusters": 38, 00:12:50.555 "snapshot": false, 00:12:50.555 "clone": false, 00:12:50.555 "esnap_clone": false 00:12:50.555 } 00:12:50.555 } 00:12:50.555 } 00:12:50.555 ] 00:12:50.555 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:12:50.555 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:50.555 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:50.813 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:50.813 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:50.813 00:28:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:51.071 00:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:51.071 00:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7688a58b-1ca7-4849-9d4b-d5d0687d8a8f 00:12:51.329 00:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df2563b2-46f7-42fe-88c9-0a7c8f9a068d 00:12:51.587 00:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:51.877 00:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:51.877 00:12:51.877 real 0m18.134s 00:12:51.877 user 0m17.759s 00:12:51.877 sys 0m1.978s 00:12:51.877 00:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:51.877 00:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:51.877 ************************************ 00:12:51.877 END TEST lvs_grow_clean 00:12:51.877 ************************************ 00:12:51.877 00:28:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:51.877 00:28:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:51.877 00:28:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:51.877 00:28:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:52.135 ************************************ 00:12:52.135 START TEST lvs_grow_dirty 00:12:52.135 ************************************ 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:52.135 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:52.394 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:52.394 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:52.652 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=76c90302-9f29-457d-a71e-fecdbd9347bc 00:12:52.652 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:12:52.652 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:52.910 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:52.910 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:52.910 00:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 76c90302-9f29-457d-a71e-fecdbd9347bc lvol 150 00:12:53.168 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a69e834c-5908-4390-a905-ef8825913653 00:12:53.168 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:53.168 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:53.426 [2024-05-15 00:28:19.369333] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:53.426 [2024-05-15 00:28:19.369432] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:53.426 true 00:12:53.426 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:12:53.426 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:12:53.684 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:53.684 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:53.942 00:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a69e834c-5908-4390-a905-ef8825913653 00:12:54.199 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:54.457 [2024-05-15 00:28:20.404481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.457 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=843158 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 843158 /var/tmp/bdevperf.sock 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 843158 ']' 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:54.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:54.715 00:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:54.715 [2024-05-15 00:28:20.708434] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:12:54.715 [2024-05-15 00:28:20.708517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843158 ] 00:12:54.715 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.715 [2024-05-15 00:28:20.780567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.973 [2024-05-15 00:28:20.897953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.973 00:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:54.973 00:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:12:54.973 00:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:55.538 Nvme0n1 00:12:55.538 00:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:55.796 [ 00:12:55.796 { 00:12:55.796 "name": "Nvme0n1", 00:12:55.796 "aliases": [ 00:12:55.796 "a69e834c-5908-4390-a905-ef8825913653" 00:12:55.796 ], 00:12:55.796 "product_name": "NVMe disk", 00:12:55.796 "block_size": 4096, 00:12:55.796 "num_blocks": 38912, 00:12:55.796 "uuid": "a69e834c-5908-4390-a905-ef8825913653", 00:12:55.796 "assigned_rate_limits": { 00:12:55.796 "rw_ios_per_sec": 0, 00:12:55.796 "rw_mbytes_per_sec": 0, 00:12:55.796 "r_mbytes_per_sec": 0, 00:12:55.796 "w_mbytes_per_sec": 0 00:12:55.796 }, 00:12:55.796 "claimed": false, 00:12:55.796 "zoned": false, 00:12:55.796 "supported_io_types": { 00:12:55.796 "read": true, 00:12:55.796 "write": true, 00:12:55.796 "unmap": true, 00:12:55.796 "write_zeroes": true, 00:12:55.796 "flush": true, 00:12:55.796 "reset": true, 00:12:55.796 "compare": true, 00:12:55.796 "compare_and_write": true, 00:12:55.796 "abort": true, 00:12:55.796 "nvme_admin": true, 00:12:55.796 "nvme_io": true 00:12:55.796 }, 00:12:55.796 "memory_domains": [ 00:12:55.796 { 00:12:55.796 "dma_device_id": "system", 00:12:55.796 "dma_device_type": 1 00:12:55.796 } 00:12:55.796 ], 00:12:55.796 "driver_specific": { 00:12:55.796 "nvme": [ 00:12:55.796 { 00:12:55.796 "trid": { 00:12:55.796 "trtype": "TCP", 00:12:55.796 "adrfam": "IPv4", 00:12:55.796 "traddr": "10.0.0.2", 00:12:55.796 "trsvcid": "4420", 00:12:55.796 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:55.796 }, 00:12:55.796 "ctrlr_data": { 00:12:55.796 "cntlid": 1, 00:12:55.796 "vendor_id": "0x8086", 00:12:55.796 "model_number": "SPDK bdev Controller", 00:12:55.796 "serial_number": "SPDK0", 00:12:55.796 "firmware_revision": "24.05", 00:12:55.796 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:55.796 "oacs": { 00:12:55.796 "security": 0, 00:12:55.796 "format": 0, 00:12:55.796 "firmware": 0, 00:12:55.796 "ns_manage": 0 00:12:55.796 }, 00:12:55.796 "multi_ctrlr": true, 00:12:55.796 "ana_reporting": false 00:12:55.796 }, 00:12:55.796 "vs": { 00:12:55.796 "nvme_version": "1.3" 00:12:55.796 }, 00:12:55.796 "ns_data": { 00:12:55.796 "id": 1, 00:12:55.796 "can_share": true 00:12:55.796 } 00:12:55.796 } 00:12:55.796 ], 00:12:55.796 "mp_policy": "active_passive" 00:12:55.796 } 00:12:55.796 } 00:12:55.796 ] 00:12:55.796 00:28:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=843294 00:12:55.796 00:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:55.796 00:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:55.796 Running I/O for 10 seconds... 00:12:57.170 Latency(us) 00:12:57.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.170 Nvme0n1 : 1.00 13824.00 54.00 0.00 0.00 0.00 0.00 0.00 00:12:57.170 =================================================================================================================== 00:12:57.170 Total : 13824.00 54.00 0.00 0.00 0.00 0.00 0.00 00:12:57.170 00:12:57.736 00:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:12:57.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.994 Nvme0n1 : 2.00 14016.00 54.75 0.00 0.00 0.00 0.00 0.00 00:12:57.994 =================================================================================================================== 00:12:57.994 Total : 14016.00 54.75 0.00 0.00 0.00 0.00 0.00 00:12:57.994 00:12:57.994 true 00:12:57.994 00:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:12:57.994 00:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:58.253 00:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:58.253 00:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:58.253 00:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 843294 00:12:58.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.818 Nvme0n1 : 3.00 14145.00 55.25 0.00 0.00 0.00 0.00 0.00 00:12:58.818 =================================================================================================================== 00:12:58.818 Total : 14145.00 55.25 0.00 0.00 0.00 0.00 0.00 00:12:58.818 00:12:59.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.751 Nvme0n1 : 4.00 14240.00 55.62 0.00 0.00 0.00 0.00 0.00 00:12:59.751 =================================================================================================================== 00:12:59.751 Total : 14240.00 55.62 0.00 0.00 0.00 0.00 0.00 00:12:59.751 00:13:01.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.124 Nvme0n1 : 5.00 14284.80 55.80 0.00 0.00 0.00 0.00 0.00 00:13:01.124 =================================================================================================================== 00:13:01.125 Total : 14284.80 55.80 0.00 0.00 0.00 0.00 0.00 00:13:01.125 00:13:02.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.058 Nvme0n1 : 6.00 14357.33 56.08 0.00 0.00 0.00 0.00 0.00 00:13:02.058 
=================================================================================================================== 00:13:02.058 Total : 14357.33 56.08 0.00 0.00 0.00 0.00 0.00 00:13:02.058 00:13:02.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.991 Nvme0n1 : 7.00 14400.43 56.25 0.00 0.00 0.00 0.00 0.00 00:13:02.991 =================================================================================================================== 00:13:02.991 Total : 14400.43 56.25 0.00 0.00 0.00 0.00 0.00 00:13:02.991 00:13:03.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.925 Nvme0n1 : 8.00 14424.00 56.34 0.00 0.00 0.00 0.00 0.00 00:13:03.925 =================================================================================================================== 00:13:03.925 Total : 14424.00 56.34 0.00 0.00 0.00 0.00 0.00 00:13:03.925 00:13:04.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.859 Nvme0n1 : 9.00 14466.22 56.51 0.00 0.00 0.00 0.00 0.00 00:13:04.859 =================================================================================================================== 00:13:04.859 Total : 14466.22 56.51 0.00 0.00 0.00 0.00 0.00 00:13:04.859 00:13:05.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.794 Nvme0n1 : 10.00 14488.30 56.59 0.00 0.00 0.00 0.00 0.00 00:13:05.794 =================================================================================================================== 00:13:05.794 Total : 14488.30 56.59 0.00 0.00 0.00 0.00 0.00 00:13:05.794 00:13:05.794 00:13:05.794 Latency(us) 00:13:05.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.794 Nvme0n1 : 10.00 14488.88 56.60 0.00 0.00 8827.96 5461.33 15728.64 00:13:05.794 =================================================================================================================== 00:13:05.794 Total : 14488.88 56.60 0.00 0.00 8827.96 5461.33 15728.64 00:13:05.794 0 00:13:05.794 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 843158 00:13:05.794 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 843158 ']' 00:13:05.794 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 843158 00:13:05.794 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:13:05.794 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:05.794 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 843158 00:13:06.052 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:06.052 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:06.052 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 843158' 00:13:06.052 killing process with pid 843158 00:13:06.052 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 843158 00:13:06.052 Received shutdown signal, test time was about 10.000000 seconds 00:13:06.052 00:13:06.052 Latency(us) 00:13:06.052 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:13:06.052 =================================================================================================================== 00:13:06.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.052 00:28:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 843158 00:13:06.311 00:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.569 00:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:06.827 00:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:06.827 00:28:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 840410 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 840410 00:13:07.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 840410 Killed "${NVMF_APP[@]}" "$@" 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=844620 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 844620 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 844620 ']' 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
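For readers following the dirty path here: the test deliberately SIGKILLs the nvmf target (pid 840410) while the lvstore is still loaded, starts a fresh target, and then re-registers the AIO bdev so the blobstore has to recover the old metadata (the "Performing recovery on blobstore" notices that follow). A minimal sketch of that sequence using the same rpc.py calls seen in the trace; $old_nvmfpid, $rootdir, $testdir and $lvs_uuid are placeholders rather than the actual test script, and waitforlisten is the helper from autotest_common.sh:

  # unclean shutdown: the lvstore on aio_bdev is never unloaded
  kill -9 "$old_nvmfpid"
  # start a fresh target inside the test namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"
  # re-creating the AIO bdev triggers blobstore recovery for lvstore "lvs"
  "$rootdir/scripts/rpc.py" bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  "$rootdir/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'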
00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:07.085 00:28:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:07.085 [2024-05-15 00:28:33.118757] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:13:07.085 [2024-05-15 00:28:33.118849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.085 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.085 [2024-05-15 00:28:33.198358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.370 [2024-05-15 00:28:33.314544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.370 [2024-05-15 00:28:33.314591] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.370 [2024-05-15 00:28:33.314619] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.370 [2024-05-15 00:28:33.314631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.370 [2024-05-15 00:28:33.314641] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.370 [2024-05-15 00:28:33.314667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.946 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:07.946 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:13:07.946 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:07.946 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:07.946 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:07.946 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.946 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:08.204 [2024-05-15 00:28:34.366676] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:08.204 [2024-05-15 00:28:34.366814] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:08.204 [2024-05-15 00:28:34.366875] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a69e834c-5908-4390-a905-ef8825913653 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=a69e834c-5908-4390-a905-ef8825913653 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # [[ -z '' ]] 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:13:08.462 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:08.720 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a69e834c-5908-4390-a905-ef8825913653 -t 2000 00:13:08.720 [ 00:13:08.720 { 00:13:08.720 "name": "a69e834c-5908-4390-a905-ef8825913653", 00:13:08.720 "aliases": [ 00:13:08.720 "lvs/lvol" 00:13:08.720 ], 00:13:08.720 "product_name": "Logical Volume", 00:13:08.720 "block_size": 4096, 00:13:08.720 "num_blocks": 38912, 00:13:08.720 "uuid": "a69e834c-5908-4390-a905-ef8825913653", 00:13:08.720 "assigned_rate_limits": { 00:13:08.720 "rw_ios_per_sec": 0, 00:13:08.720 "rw_mbytes_per_sec": 0, 00:13:08.720 "r_mbytes_per_sec": 0, 00:13:08.720 "w_mbytes_per_sec": 0 00:13:08.720 }, 00:13:08.720 "claimed": false, 00:13:08.720 "zoned": false, 00:13:08.720 "supported_io_types": { 00:13:08.720 "read": true, 00:13:08.720 "write": true, 00:13:08.720 "unmap": true, 00:13:08.720 "write_zeroes": true, 00:13:08.720 "flush": false, 00:13:08.720 "reset": true, 00:13:08.720 "compare": false, 00:13:08.720 "compare_and_write": false, 00:13:08.720 "abort": false, 00:13:08.720 "nvme_admin": false, 00:13:08.720 "nvme_io": false 00:13:08.720 }, 00:13:08.720 "driver_specific": { 00:13:08.720 "lvol": { 00:13:08.720 "lvol_store_uuid": "76c90302-9f29-457d-a71e-fecdbd9347bc", 00:13:08.720 "base_bdev": "aio_bdev", 00:13:08.720 "thin_provision": false, 00:13:08.720 "num_allocated_clusters": 38, 00:13:08.720 "snapshot": false, 00:13:08.720 "clone": false, 00:13:08.720 "esnap_clone": false 00:13:08.720 } 00:13:08.720 } 00:13:08.720 } 00:13:08.720 ] 00:13:08.979 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:13:08.979 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:08.979 00:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:08.979 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:08.979 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:08.979 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:09.238 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:09.238 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:09.497 [2024-05-15 00:28:35.611508] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:09.497 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:09.755 request: 00:13:09.755 { 00:13:09.755 "uuid": "76c90302-9f29-457d-a71e-fecdbd9347bc", 00:13:09.755 "method": "bdev_lvol_get_lvstores", 00:13:09.755 "req_id": 1 00:13:09.755 } 00:13:09.755 Got JSON-RPC error response 00:13:09.755 response: 00:13:09.755 { 00:13:09.755 "code": -19, 00:13:09.755 "message": "No such device" 00:13:09.755 } 00:13:09.755 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:13:09.755 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:09.755 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:09.755 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:09.755 00:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:10.012 aio_bdev 00:13:10.012 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a69e834c-5908-4390-a905-ef8825913653 00:13:10.013 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=a69e834c-5908-4390-a905-ef8825913653 00:13:10.013 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:13:10.013 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:13:10.013 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 
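The NOT/valid_exec_arg wrapper above is simply asserting that the RPC fails: with aio_bdev deleted, bdev_lvol_get_lvstores has to come back with -19 (No such device), and waitforbdev then polls until the lvol is visible again once the AIO bdev is re-created. A rough stand-alone equivalent of that check, not the autotest helpers themselves; $rootdir, $testdir, $lvs_uuid and $lvol_uuid are placeholders:

  # the lvstore must be gone once its base bdev was deleted
  if "$rootdir/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs_uuid" 2>/dev/null; then
      echo "lvstore unexpectedly survived bdev_aio_delete" >&2
      exit 1
  fi
  # bring the backing file back and wait until the lvol bdev reappears
  "$rootdir/scripts/rpc.py" bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  "$rootdir/scripts/rpc.py" bdev_wait_for_examine
  "$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$lvol_uuid" -t 2000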
00:13:10.013 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:13:10.013 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:10.271 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a69e834c-5908-4390-a905-ef8825913653 -t 2000 00:13:10.529 [ 00:13:10.529 { 00:13:10.529 "name": "a69e834c-5908-4390-a905-ef8825913653", 00:13:10.529 "aliases": [ 00:13:10.529 "lvs/lvol" 00:13:10.529 ], 00:13:10.529 "product_name": "Logical Volume", 00:13:10.529 "block_size": 4096, 00:13:10.529 "num_blocks": 38912, 00:13:10.529 "uuid": "a69e834c-5908-4390-a905-ef8825913653", 00:13:10.529 "assigned_rate_limits": { 00:13:10.529 "rw_ios_per_sec": 0, 00:13:10.529 "rw_mbytes_per_sec": 0, 00:13:10.529 "r_mbytes_per_sec": 0, 00:13:10.529 "w_mbytes_per_sec": 0 00:13:10.529 }, 00:13:10.529 "claimed": false, 00:13:10.529 "zoned": false, 00:13:10.529 "supported_io_types": { 00:13:10.529 "read": true, 00:13:10.529 "write": true, 00:13:10.529 "unmap": true, 00:13:10.529 "write_zeroes": true, 00:13:10.529 "flush": false, 00:13:10.529 "reset": true, 00:13:10.529 "compare": false, 00:13:10.529 "compare_and_write": false, 00:13:10.529 "abort": false, 00:13:10.529 "nvme_admin": false, 00:13:10.529 "nvme_io": false 00:13:10.529 }, 00:13:10.529 "driver_specific": { 00:13:10.529 "lvol": { 00:13:10.529 "lvol_store_uuid": "76c90302-9f29-457d-a71e-fecdbd9347bc", 00:13:10.529 "base_bdev": "aio_bdev", 00:13:10.529 "thin_provision": false, 00:13:10.529 "num_allocated_clusters": 38, 00:13:10.529 "snapshot": false, 00:13:10.529 "clone": false, 00:13:10.529 "esnap_clone": false 00:13:10.529 } 00:13:10.529 } 00:13:10.529 } 00:13:10.529 ] 00:13:10.529 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:13:10.529 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:10.529 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:10.787 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:10.787 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:10.787 00:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:11.045 00:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:11.045 00:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a69e834c-5908-4390-a905-ef8825913653 00:13:11.303 00:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76c90302-9f29-457d-a71e-fecdbd9347bc 00:13:11.562 00:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:11.820 00:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:12.078 00:13:12.078 real 0m19.970s 00:13:12.078 user 0m50.004s 00:13:12.078 sys 0m5.065s 00:13:12.078 00:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:12.078 00:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:12.078 ************************************ 00:13:12.078 END TEST lvs_grow_dirty 00:13:12.078 ************************************ 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:12.078 nvmf_trace.0 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.078 rmmod nvme_tcp 00:13:12.078 rmmod nvme_fabrics 00:13:12.078 rmmod nvme_keyring 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 844620 ']' 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 844620 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 844620 ']' 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 844620 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:12.078 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 844620 00:13:12.079 00:28:38 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:12.079 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:12.079 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 844620' 00:13:12.079 killing process with pid 844620 00:13:12.079 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 844620 00:13:12.079 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 844620 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.338 00:28:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.873 00:28:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.873 00:13:14.873 real 0m44.609s 00:13:14.873 user 1m14.591s 00:13:14.873 sys 0m9.221s 00:13:14.873 00:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:14.873 00:28:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:14.873 ************************************ 00:13:14.873 END TEST nvmf_lvs_grow 00:13:14.873 ************************************ 00:13:14.873 00:28:40 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:14.873 00:28:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:14.873 00:28:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:14.873 00:28:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.873 ************************************ 00:13:14.873 START TEST nvmf_bdev_io_wait 00:13:14.873 ************************************ 00:13:14.873 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:14.873 * Looking for test storage... 
00:13:14.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.873 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.873 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:14.873 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.873 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.873 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.873 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.874 00:28:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.406 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:13:17.407 00:13:17.407 --- 10.0.0.2 ping statistics --- 00:13:17.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.407 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:13:17.407 00:13:17.407 --- 10.0.0.1 ping statistics --- 00:13:17.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.407 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=847564 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 847564 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 847564 ']' 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:17.407 00:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:17.407 [2024-05-15 00:28:43.270646] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
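The nvmf_tcp_init trace above builds the whole test fabric on a single host: the first E810 port (cvl_0_0) is moved into a private network namespace that will hold the target, its sibling port (cvl_0_1) stays in the root namespace for the initiator, and the two pings confirm 10.0.0.1 and 10.0.0.2 can reach each other before nvmf_tgt is started (every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk). A distilled sketch of that setup, using the interface names, addresses and namespace name from this particular run:

    # Condensed from the nvmf_tcp_init trace above; cvl_0_0/cvl_0_1, the /24 addresses
    # and the namespace name are the values used in this run, not fixed API.
    ip netns add cvl_0_0_ns_spdk                                          # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the first NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator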
00:13:17.407 [2024-05-15 00:28:43.270732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.407 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.407 [2024-05-15 00:28:43.351685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.407 [2024-05-15 00:28:43.463961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.407 [2024-05-15 00:28:43.464023] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.407 [2024-05-15 00:28:43.464051] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.407 [2024-05-15 00:28:43.464062] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.407 [2024-05-15 00:28:43.464071] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.407 [2024-05-15 00:28:43.464167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.407 [2024-05-15 00:28:43.464231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.407 [2024-05-15 00:28:43.464302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.407 [2024-05-15 00:28:43.464305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 [2024-05-15 00:28:44.318255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.338 00:28:44 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 Malloc0 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:18.338 [2024-05-15 00:28:44.385514] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:18.338 [2024-05-15 00:28:44.385828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=847726 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=847728 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.338 { 00:13:18.338 "params": { 00:13:18.338 "name": "Nvme$subsystem", 00:13:18.338 "trtype": "$TEST_TRANSPORT", 00:13:18.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.338 "adrfam": "ipv4", 00:13:18.338 "trsvcid": "$NVMF_PORT", 00:13:18.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.338 "hdgst": ${hdgst:-false}, 00:13:18.338 "ddgst": ${ddgst:-false} 00:13:18.338 }, 00:13:18.338 "method": 
"bdev_nvme_attach_controller" 00:13:18.338 } 00:13:18.338 EOF 00:13:18.338 )") 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=847730 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:18.338 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.339 { 00:13:18.339 "params": { 00:13:18.339 "name": "Nvme$subsystem", 00:13:18.339 "trtype": "$TEST_TRANSPORT", 00:13:18.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.339 "adrfam": "ipv4", 00:13:18.339 "trsvcid": "$NVMF_PORT", 00:13:18.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.339 "hdgst": ${hdgst:-false}, 00:13:18.339 "ddgst": ${ddgst:-false} 00:13:18.339 }, 00:13:18.339 "method": "bdev_nvme_attach_controller" 00:13:18.339 } 00:13:18.339 EOF 00:13:18.339 )") 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=847733 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.339 { 00:13:18.339 "params": { 00:13:18.339 "name": "Nvme$subsystem", 00:13:18.339 "trtype": "$TEST_TRANSPORT", 00:13:18.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.339 "adrfam": "ipv4", 00:13:18.339 "trsvcid": "$NVMF_PORT", 00:13:18.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.339 "hdgst": ${hdgst:-false}, 00:13:18.339 "ddgst": ${ddgst:-false} 00:13:18.339 }, 00:13:18.339 "method": "bdev_nvme_attach_controller" 00:13:18.339 } 00:13:18.339 EOF 00:13:18.339 )") 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.339 { 00:13:18.339 "params": { 00:13:18.339 "name": "Nvme$subsystem", 00:13:18.339 "trtype": "$TEST_TRANSPORT", 00:13:18.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.339 "adrfam": "ipv4", 00:13:18.339 "trsvcid": "$NVMF_PORT", 00:13:18.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.339 "hdgst": ${hdgst:-false}, 00:13:18.339 "ddgst": ${ddgst:-false} 00:13:18.339 }, 00:13:18.339 "method": "bdev_nvme_attach_controller" 00:13:18.339 } 00:13:18.339 EOF 00:13:18.339 )") 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 847726 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.339 "params": { 00:13:18.339 "name": "Nvme1", 00:13:18.339 "trtype": "tcp", 00:13:18.339 "traddr": "10.0.0.2", 00:13:18.339 "adrfam": "ipv4", 00:13:18.339 "trsvcid": "4420", 00:13:18.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.339 "hdgst": false, 00:13:18.339 "ddgst": false 00:13:18.339 }, 00:13:18.339 "method": "bdev_nvme_attach_controller" 00:13:18.339 }' 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
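Because nvmf_tgt was started with --wait-for-rpc, the provisioning traced further up (bdev_io_wait.sh lines 18-25) has to set the bdev options and finish framework init explicitly before the transport, the 64 MiB Malloc namespace and the TCP listener are created. rpc_cmd in the test harness forwards to scripts/rpc.py, so the same sequence issued by hand would look roughly like this (socket path assumed to be the default /var/tmp/spdk.sock that waitforlisten polls above):

    # Hand-driven equivalent of the rpc_cmd provisioning sequence traced above; an
    # approximation of the harness behaviour, not a verbatim copy of it.
    rpc=scripts/rpc.py                                    # assumes the default /var/tmp/spdk.sock
    $rpc bdev_set_options -p 5 -c 1                       # bdev_io pool/cache sizes; only settable before init completes
    $rpc framework_start_init                             # finish the init deferred by --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB backing bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420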
00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.339 "params": { 00:13:18.339 "name": "Nvme1", 00:13:18.339 "trtype": "tcp", 00:13:18.339 "traddr": "10.0.0.2", 00:13:18.339 "adrfam": "ipv4", 00:13:18.339 "trsvcid": "4420", 00:13:18.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.339 "hdgst": false, 00:13:18.339 "ddgst": false 00:13:18.339 }, 00:13:18.339 "method": "bdev_nvme_attach_controller" 00:13:18.339 }' 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.339 "params": { 00:13:18.339 "name": "Nvme1", 00:13:18.339 "trtype": "tcp", 00:13:18.339 "traddr": "10.0.0.2", 00:13:18.339 "adrfam": "ipv4", 00:13:18.339 "trsvcid": "4420", 00:13:18.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.339 "hdgst": false, 00:13:18.339 "ddgst": false 00:13:18.339 }, 00:13:18.339 "method": "bdev_nvme_attach_controller" 00:13:18.339 }' 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:18.339 00:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.339 "params": { 00:13:18.339 "name": "Nvme1", 00:13:18.339 "trtype": "tcp", 00:13:18.339 "traddr": "10.0.0.2", 00:13:18.339 "adrfam": "ipv4", 00:13:18.339 "trsvcid": "4420", 00:13:18.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.339 "hdgst": false, 00:13:18.339 "ddgst": false 00:13:18.339 }, 00:13:18.339 "method": "bdev_nvme_attach_controller" 00:13:18.339 }' 00:13:18.339 [2024-05-15 00:28:44.430915] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:13:18.339 [2024-05-15 00:28:44.430915] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:13:18.339 [2024-05-15 00:28:44.430915] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:13:18.339 [2024-05-15 00:28:44.431017] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 00:28:44.431017] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 00:28:44.431018] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:18.339 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:18.339 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:18.339 [2024-05-15 00:28:44.432301] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
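Each of the four bdevperf jobs (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80, shm ids 1-4 so their DPDK file prefixes spdk1..spdk4 do not collide) reads its bdev configuration from --json /dev/fd/63, i.e. a process substitution fed by gen_nvmf_target_json; the bdev_nvme_attach_controller fragments printed above are what that helper resolves for Nvme1 at 10.0.0.2:4420, wrapped into a full bdevperf JSON config before being handed over. Stripped of the harness variables, launching one of the jobs looks roughly like:

    # One of the four instances (the 1-second write job); the JSON seen as /dev/fd/63 in
    # the trace comes from the <(...) process substitution shown here.
    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    wait $WRITE_PID          # bdev_io_wait.sh waits for each job in turn once all four are running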
00:13:18.339 [2024-05-15 00:28:44.432369] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:18.339 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.598 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.598 [2024-05-15 00:28:44.617047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.598 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.598 [2024-05-15 00:28:44.713354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:18.598 [2024-05-15 00:28:44.715907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.856 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.856 [2024-05-15 00:28:44.812558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.856 [2024-05-15 00:28:44.814484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:18.856 [2024-05-15 00:28:44.883021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.856 [2024-05-15 00:28:44.906524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:18.856 [2024-05-15 00:28:44.973000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:18.856 Running I/O for 1 seconds... 00:13:19.117 Running I/O for 1 seconds... 00:13:19.117 Running I/O for 1 seconds... 00:13:19.117 Running I/O for 1 seconds... 00:13:20.052 00:13:20.052 Latency(us) 00:13:20.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.052 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:20.052 Nvme1n1 : 1.01 9364.16 36.58 0.00 0.00 13610.67 7573.05 22913.33 00:13:20.052 =================================================================================================================== 00:13:20.052 Total : 9364.16 36.58 0.00 0.00 13610.67 7573.05 22913.33 00:13:20.052 00:13:20.052 Latency(us) 00:13:20.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.052 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:20.052 Nvme1n1 : 1.01 9444.67 36.89 0.00 0.00 13494.44 8058.50 23981.32 00:13:20.052 =================================================================================================================== 00:13:20.052 Total : 9444.67 36.89 0.00 0.00 13494.44 8058.50 23981.32 00:13:20.310 00:13:20.310 Latency(us) 00:13:20.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.310 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:20.310 Nvme1n1 : 1.01 8873.06 34.66 0.00 0.00 14349.88 4126.34 22719.15 00:13:20.310 =================================================================================================================== 00:13:20.310 Total : 8873.06 34.66 0.00 0.00 14349.88 4126.34 22719.15 00:13:20.310 00:13:20.310 Latency(us) 00:13:20.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.310 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:20.310 Nvme1n1 : 1.00 192372.28 751.45 0.00 0.00 662.70 273.07 892.02 00:13:20.310 =================================================================================================================== 00:13:20.310 Total : 192372.28 751.45 0.00 0.00 662.70 273.07 892.02 00:13:20.310 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 
847728 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 847730 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 847733 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:20.568 rmmod nvme_tcp 00:13:20.568 rmmod nvme_fabrics 00:13:20.568 rmmod nvme_keyring 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 847564 ']' 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 847564 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 847564 ']' 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 847564 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 847564 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 847564' 00:13:20.568 killing process with pid 847564 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 847564 00:13:20.568 [2024-05-15 00:28:46.641860] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:20.568 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 847564 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.828 00:28:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.366 00:28:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.366 00:13:23.366 real 0m8.437s 00:13:23.366 user 0m19.965s 00:13:23.366 sys 0m3.981s 00:13:23.366 00:28:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:23.366 00:28:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.366 ************************************ 00:13:23.366 END TEST nvmf_bdev_io_wait 00:13:23.366 ************************************ 00:13:23.366 00:28:48 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:23.366 00:28:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:23.366 00:28:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:23.366 00:28:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.366 ************************************ 00:13:23.366 START TEST nvmf_queue_depth 00:13:23.366 ************************************ 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:23.366 * Looking for test storage... 
00:13:23.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.366 00:28:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.934 
00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:25.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:25.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:25.934 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:25.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:25.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:13:25.934 00:13:25.934 --- 10.0.0.2 ping statistics --- 00:13:25.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.934 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:25.934 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:13:25.934 00:13:25.934 --- 10.0.0.1 ping statistics --- 00:13:25.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.934 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=850247 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 850247 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 850247 ']' 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:25.935 00:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.935 [2024-05-15 00:28:51.688778] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
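For the queue_depth test the same namespace topology is rebuilt and the target is started again, this time pinned to a single reactor (-m 0x2) and without --wait-for-rpc, so the framework initializes on its own before the subsystem is provisioned further below. The launch distilled from the trace above:

    # Target launch for nvmf_queue_depth, as traced above: same netns wrapper, one reactor.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten $nvmfpid        # harness helper: block until /var/tmp/spdk.sock answers RPCs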
00:13:25.935 [2024-05-15 00:28:51.688870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.935 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.935 [2024-05-15 00:28:51.765444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.935 [2024-05-15 00:28:51.875457] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.935 [2024-05-15 00:28:51.875514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.935 [2024-05-15 00:28:51.875542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.935 [2024-05-15 00:28:51.875554] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.935 [2024-05-15 00:28:51.875563] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.935 [2024-05-15 00:28:51.875589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 [2024-05-15 00:28:52.711474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 Malloc0 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.865 00:28:52 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 [2024-05-15 00:28:52.771341] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:26.865 [2024-05-15 00:28:52.771633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=850399 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 850399 /var/tmp/bdevperf.sock 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 850399 ']' 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:26.865 00:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:26.865 [2024-05-15 00:28:52.816431] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
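The initiator side of queue_depth.sh uses bdevperf's idle mode: -z keeps the app waiting on its own RPC socket (/var/tmp/bdevperf.sock), the NVMe-oF controller is then attached over that socket, and bdevperf.py perform_tests kicks off the 10-second verify run at queue depth 1024, as the trace that follows shows. The same flow by hand, with rpc_cmd (the harness wrapper around scripts/rpc.py) written out as a direct call:

    # bdevperf driven over its own RPC socket, per the queue_depth.sh trace around this point.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock
    # Create bdev NVMe0n1 backed by the TCP subsystem exported at 10.0.0.2:4420
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Release the idle bdevperf instance and run the workload
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests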
00:13:26.865 [2024-05-15 00:28:52.816510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850399 ] 00:13:26.865 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.865 [2024-05-15 00:28:52.888604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.865 [2024-05-15 00:28:53.004788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.122 00:28:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:27.122 00:28:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:13:27.122 00:28:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:27.122 00:28:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:27.122 00:28:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:27.122 NVMe0n1 00:13:27.122 00:28:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:27.122 00:28:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:27.380 Running I/O for 10 seconds... 00:13:37.349 00:13:37.349 Latency(us) 00:13:37.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.349 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:37.349 Verification LBA range: start 0x0 length 0x4000 00:13:37.349 NVMe0n1 : 10.09 8471.45 33.09 0.00 0.00 120249.08 25826.04 78837.38 00:13:37.349 =================================================================================================================== 00:13:37.349 Total : 8471.45 33.09 0.00 0.00 120249.08 25826.04 78837.38 00:13:37.349 0 00:13:37.349 00:29:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 850399 00:13:37.349 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 850399 ']' 00:13:37.349 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 850399 00:13:37.349 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:13:37.349 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:37.349 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 850399 00:13:37.606 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:37.607 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:37.607 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 850399' 00:13:37.607 killing process with pid 850399 00:13:37.607 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 850399 00:13:37.607 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.607 00:13:37.607 Latency(us) 00:13:37.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.607 =================================================================================================================== 00:13:37.607 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.607 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 850399 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.864 rmmod nvme_tcp 00:13:37.864 rmmod nvme_fabrics 00:13:37.864 rmmod nvme_keyring 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 850247 ']' 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 850247 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 850247 ']' 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 850247 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 850247 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 850247' 00:13:37.864 killing process with pid 850247 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 850247 00:13:37.864 [2024-05-15 00:29:03.903526] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:37.864 00:29:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 850247 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.122 00:29:04 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.661 00:29:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.661 00:13:40.661 real 0m17.237s 00:13:40.661 user 0m23.772s 00:13:40.661 sys 0m3.385s 00:13:40.661 00:29:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:40.661 00:29:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:40.661 ************************************ 00:13:40.661 END TEST nvmf_queue_depth 00:13:40.661 ************************************ 00:13:40.661 00:29:06 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:40.661 00:29:06 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:40.661 00:29:06 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:40.661 00:29:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:40.661 ************************************ 00:13:40.661 START TEST nvmf_target_multipath 00:13:40.661 ************************************ 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:40.661 * Looking for test storage... 00:13:40.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.661 00:29:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:43.195 00:29:08 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.195 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.196 00:29:08 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:13:43.196 00:13:43.196 --- 10.0.0.2 ping statistics --- 00:13:43.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.196 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:43.196 00:13:43.196 --- 10.0.0.1 ping statistics --- 00:13:43.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.196 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:43.196 only one NIC for nvmf test 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.196 rmmod nvme_tcp 00:13:43.196 rmmod nvme_fabrics 00:13:43.196 rmmod nvme_keyring 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.196 00:29:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.100 00:13:45.100 real 0m4.723s 00:13:45.100 user 0m0.959s 00:13:45.100 sys 0m1.777s 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:45.100 00:29:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:45.100 ************************************ 00:13:45.100 END TEST nvmf_target_multipath 00:13:45.100 ************************************ 00:13:45.100 00:29:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:45.100 00:29:11 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:45.100 00:29:11 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:45.100 00:29:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.100 ************************************ 00:13:45.100 START TEST nvmf_zcopy 00:13:45.100 ************************************ 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:45.100 * Looking for test storage... 
00:13:45.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:45.100 00:29:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.101 00:29:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:47.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.632 
00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:47.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:47.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:47.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:13:47.632 00:13:47.632 --- 10.0.0.2 ping statistics --- 00:13:47.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.632 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:13:47.632 00:13:47.632 --- 10.0.0.1 ping statistics --- 00:13:47.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.632 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=856155 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 856155 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 856155 ']' 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:47.632 00:29:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:47.632 [2024-05-15 00:29:13.735106] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:13:47.632 [2024-05-15 00:29:13.735178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.632 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.890 [2024-05-15 00:29:13.813706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.890 [2024-05-15 00:29:13.922902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.890 [2024-05-15 00:29:13.922982] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:47.890 [2024-05-15 00:29:13.922996] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.890 [2024-05-15 00:29:13.923007] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.890 [2024-05-15 00:29:13.923017] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.890 [2024-05-15 00:29:13.923045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.890 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:47.890 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:13:47.890 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.890 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:47.890 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:48.149 [2024-05-15 00:29:14.070008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:48.149 [2024-05-15 00:29:14.085953] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:48.149 [2024-05-15 00:29:14.086253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:48.149 malloc0 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:48.149 { 00:13:48.149 "params": { 00:13:48.149 "name": "Nvme$subsystem", 00:13:48.149 "trtype": "$TEST_TRANSPORT", 00:13:48.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:48.149 "adrfam": "ipv4", 00:13:48.149 "trsvcid": "$NVMF_PORT", 00:13:48.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:48.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:48.149 "hdgst": ${hdgst:-false}, 00:13:48.149 "ddgst": ${ddgst:-false} 00:13:48.149 }, 00:13:48.149 "method": "bdev_nvme_attach_controller" 00:13:48.149 } 00:13:48.149 EOF 00:13:48.149 )") 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:48.149 00:29:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:48.149 "params": { 00:13:48.149 "name": "Nvme1", 00:13:48.149 "trtype": "tcp", 00:13:48.149 "traddr": "10.0.0.2", 00:13:48.149 "adrfam": "ipv4", 00:13:48.149 "trsvcid": "4420", 00:13:48.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:48.149 "hdgst": false, 00:13:48.149 "ddgst": false 00:13:48.149 }, 00:13:48.149 "method": "bdev_nvme_attach_controller" 00:13:48.149 }' 00:13:48.149 [2024-05-15 00:29:14.164414] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:13:48.149 [2024-05-15 00:29:14.164509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856303 ] 00:13:48.149 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.149 [2024-05-15 00:29:14.237435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.408 [2024-05-15 00:29:14.358000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.666 Running I/O for 10 seconds... 
00:13:58.655 00:13:58.655 Latency(us) 00:13:58.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.655 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:58.655 Verification LBA range: start 0x0 length 0x1000 00:13:58.655 Nvme1n1 : 10.02 4691.68 36.65 0.00 0.00 27212.56 1589.85 41748.86 00:13:58.655 =================================================================================================================== 00:13:58.655 Total : 4691.68 36.65 0.00 0.00 27212.56 1589.85 41748.86 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=857498 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:58.912 { 00:13:58.912 "params": { 00:13:58.912 "name": "Nvme$subsystem", 00:13:58.912 "trtype": "$TEST_TRANSPORT", 00:13:58.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:58.912 "adrfam": "ipv4", 00:13:58.912 "trsvcid": "$NVMF_PORT", 00:13:58.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:58.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:58.912 "hdgst": ${hdgst:-false}, 00:13:58.912 "ddgst": ${ddgst:-false} 00:13:58.912 }, 00:13:58.912 "method": "bdev_nvme_attach_controller" 00:13:58.912 } 00:13:58.912 EOF 00:13:58.912 )") 00:13:58.912 [2024-05-15 00:29:25.024997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:58.912 [2024-05-15 00:29:25.025039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:58.912 00:29:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:58.912 "params": { 00:13:58.912 "name": "Nvme1", 00:13:58.912 "trtype": "tcp", 00:13:58.912 "traddr": "10.0.0.2", 00:13:58.912 "adrfam": "ipv4", 00:13:58.912 "trsvcid": "4420", 00:13:58.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:58.912 "hdgst": false, 00:13:58.912 "ddgst": false 00:13:58.912 }, 00:13:58.912 "method": "bdev_nvme_attach_controller" 00:13:58.912 }' 00:13:58.912 [2024-05-15 00:29:25.032941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.912 [2024-05-15 00:29:25.032975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.912 [2024-05-15 00:29:25.040986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.912 [2024-05-15 00:29:25.041009] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.912 [2024-05-15 00:29:25.048978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.912 [2024-05-15 00:29:25.048999] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.912 [2024-05-15 00:29:25.056999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.912 [2024-05-15 00:29:25.057021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.912 [2024-05-15 00:29:25.060799] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:13:58.912 [2024-05-15 00:29:25.060861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857498 ] 00:13:58.912 [2024-05-15 00:29:25.065034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.912 [2024-05-15 00:29:25.065055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.912 [2024-05-15 00:29:25.073055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.912 [2024-05-15 00:29:25.073093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.170 [2024-05-15 00:29:25.081063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.170 [2024-05-15 00:29:25.081084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.170 [2024-05-15 00:29:25.089088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.170 [2024-05-15 00:29:25.089109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.170 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.170 [2024-05-15 00:29:25.097108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.170 [2024-05-15 00:29:25.097129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.170 [2024-05-15 00:29:25.105134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.170 [2024-05-15 00:29:25.105156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.170 [2024-05-15 00:29:25.113155] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.170 [2024-05-15 00:29:25.113177] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.170 [2024-05-15 00:29:25.121177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.170 [2024-05-15 00:29:25.121199] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.129194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.129229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.132691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.171 [2024-05-15 00:29:25.137243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.137267] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.145299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.145334] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.153273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.153294] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.161306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.161326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.169327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.169348] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.177348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.177369] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.185376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.185397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.193412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.193439] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.201473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.201532] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.209440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.209462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.217463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.217484] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.225485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.225506] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.233521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.233542] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.241527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.241548] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.245409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.171 [2024-05-15 00:29:25.249565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.249585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.257569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.257590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.265632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.265665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.273638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.273672] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.281661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.281696] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.289695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.289731] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.297710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.297745] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.305721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.305752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.313723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.313745] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.321771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.321807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.171 [2024-05-15 00:29:25.329794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.171 [2024-05-15 00:29:25.329829] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.337790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:59.429 [2024-05-15 00:29:25.337825] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.345812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.345838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.353846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.353871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.361858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.361881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.369881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.369902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.377903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.377947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.385949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.385987] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.393976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.394001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.402007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.402031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.410025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.410047] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.418044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.418070] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.426053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.426076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 Running I/O for 5 seconds... 
00:13:59.429 [2024-05-15 00:29:25.440305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.440337] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.460898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.460949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.482360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.482393] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.504193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.504236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.526019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.526047] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.546943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.546974] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.570556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.570587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.429 [2024-05-15 00:29:25.592116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.429 [2024-05-15 00:29:25.592144] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.614650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.614681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.637069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.637098] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.659395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.659427] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.680522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.680554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.699244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.699276] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.720844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.720876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.742663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 
[2024-05-15 00:29:25.742694] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.760804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.760836] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.782368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.782400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.805545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.805577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.828603] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.828635] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.687 [2024-05-15 00:29:25.851398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.687 [2024-05-15 00:29:25.851430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:25.875180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:25.875223] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:25.897200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:25.897244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:25.919500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:25.919533] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:25.942068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:25.942097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:25.963751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:25.963782] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:25.985016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:25.985046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:26.002246] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:26.002274] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:26.021298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:26.021332] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:26.043081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:26.043118] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:26.060282] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:26.060315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.946 [2024-05-15 00:29:26.080461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.946 [2024-05-15 00:29:26.080492] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:59.947 [2024-05-15 00:29:26.104060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:59.947 [2024-05-15 00:29:26.104090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.125743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.125775] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.147339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.147372] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.168772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.168804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.191200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.191243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.214852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.214884] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.237985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.238014] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.261666] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.261698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.285048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.285077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.306900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.306939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.328683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.328716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.349994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.350023] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.204 [2024-05-15 00:29:26.369059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.204 [2024-05-15 00:29:26.369089] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.390956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.391000] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.407944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.408004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.428577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.428620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.452137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.452175] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.474793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.474825] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.497937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.497984] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.521458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.521489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.542446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.542478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.565732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.565763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.587453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.587485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.462 [2024-05-15 00:29:26.608792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.462 [2024-05-15 00:29:26.608824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.627100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.627129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.650115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.650143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.670505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.670538] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.692424] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.692456] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.715257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.715289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.735987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.736016] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.759699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.759730] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.783115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.783144] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.804700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.804732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.827639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.827682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.849525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.849557] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.719 [2024-05-15 00:29:26.872883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.719 [2024-05-15 00:29:26.872915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:26.894980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:26.895009] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:26.916195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:26.916240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:26.937293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:26.937326] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:26.959602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:26.959634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:26.980069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:26.980108] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:27.001590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:27.001621] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:27.023688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:27.023719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:27.045176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:27.045205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:27.066675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:27.066707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:27.084855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:27.084887] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:27.106516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:27.106548] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.977 [2024-05-15 00:29:27.122407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.977 [2024-05-15 00:29:27.122439] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.144073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.144102] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.165921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.165976] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.188258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.188290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.210991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.211021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.232326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.232371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.253627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.253658] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.271640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.271673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.294265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.294297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.316028] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.316056] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.337683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.337715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.358135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.358164] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.375199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.375244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.234 [2024-05-15 00:29:27.397711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.234 [2024-05-15 00:29:27.397742] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.420730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.420762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.442937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.442969] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.465464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.465506] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.486675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.486708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.509100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.509129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.532460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.532492] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.557669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.557704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.579531] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.579563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.602611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.602644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.623731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.623762] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.492 [2024-05-15 00:29:27.641707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.492 [2024-05-15 00:29:27.641756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.665098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.665126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.686644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.686675] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.710044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.710073] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.735777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.735808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.757045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.757074] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.774868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.774900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.796819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.796851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.812925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.812979] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.834010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.834039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.855188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.855234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.877751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.877783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.749 [2024-05-15 00:29:27.898757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.749 [2024-05-15 00:29:27.898789] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:27.921133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:27.921162] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:27.943738] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:27.943769] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:27.966526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:27.966558] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:27.989491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:27.989522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.011769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.011800] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.035168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.035197] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.058631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.058676] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.079670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.079701] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.098467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.098500] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.120812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.120844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.142675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.142707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.007 [2024-05-15 00:29:28.165568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.007 [2024-05-15 00:29:28.165600] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.265 [2024-05-15 00:29:28.189556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.265 [2024-05-15 00:29:28.189589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.265 [2024-05-15 00:29:28.212131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.265 [2024-05-15 00:29:28.212160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.265 [2024-05-15 00:29:28.234901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.265 [2024-05-15 00:29:28.234940] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.265 [2024-05-15 00:29:28.257503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.265 [2024-05-15 00:29:28.257534] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair "subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats roughly every 20 ms from 00:29:28.280128 through 00:29:30.461915 as the test keeps retrying nvmf_subsystem_add_ns for NSID 1; the repeated entries are condensed here ...]
00:14:04.328
00:14:04.328                                                                            Latency(us)
00:14:04.328 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:14:04.328 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:04.328      Nvme1n1              :       5.02    5704.44      44.57       0.00      0.00    22382.31    7815.77   35146.71
00:14:04.328 ===================================================================================================================
00:14:04.328      Total                :               5704.44      44.57       0.00      0.00    22382.31    7815.77   35146.71
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair repeats from 00:29:30.469626 through 00:29:30.742412 while the final add_ns retries drain; the repeated entries are condensed here ...]
00:14:04.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (857498) - No such process
00:14:04.587 00:29:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 857498
00:14:04.587 00:29:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:04.587 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:04.587 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:04.844 delay0
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:04.844 00:29:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:14:04.844 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.844 [2024-05-15 00:29:30.865579] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:11.399 Initializing NVMe Controllers 00:14:11.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:11.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:11.399 Initialization complete. Launching workers. 00:14:11.399 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 88 00:14:11.399 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 375, failed to submit 33 00:14:11.399 success 177, unsuccess 198, failed 0 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.399 rmmod nvme_tcp 00:14:11.399 rmmod nvme_fabrics 00:14:11.399 rmmod nvme_keyring 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 856155 ']' 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 856155 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 856155 ']' 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 856155 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 856155 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 856155' 00:14:11.399 killing process with pid 856155 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 856155 00:14:11.399 [2024-05-15 00:29:37.228745] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 856155 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.399 00:29:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.929 00:29:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.929 00:14:13.929 real 0m28.474s 00:14:13.929 user 0m38.364s 00:14:13.929 sys 0m9.325s 00:14:13.929 00:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:13.930 00:29:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:13.930 ************************************ 00:14:13.930 END TEST nvmf_zcopy 00:14:13.930 ************************************ 00:14:13.930 00:29:39 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:13.930 00:29:39 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:13.930 00:29:39 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:13.930 00:29:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.930 ************************************ 00:14:13.930 START TEST nvmf_nmic 00:14:13.930 ************************************ 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:13.930 * Looking for test storage... 
00:14:13.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.930 00:29:39 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.930 00:29:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.482 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.482 
00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:16.483 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.483 00:29:42 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:16.483 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:16.483 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:16.483 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:14:16.483 00:14:16.483 --- 10.0.0.2 ping statistics --- 00:14:16.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.483 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:14:16.483 00:14:16.483 --- 10.0.0.1 ping statistics --- 00:14:16.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.483 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.483 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=861221 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 861221 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 861221 ']' 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:16.484 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.484 [2024-05-15 00:29:42.388512] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:14:16.484 [2024-05-15 00:29:42.388581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.484 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.484 [2024-05-15 00:29:42.464376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.484 [2024-05-15 00:29:42.571913] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.484 [2024-05-15 00:29:42.571980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:16.484 [2024-05-15 00:29:42.572019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.484 [2024-05-15 00:29:42.572031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.484 [2024-05-15 00:29:42.572041] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.484 [2024-05-15 00:29:42.572102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.484 [2024-05-15 00:29:42.572164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.484 [2024-05-15 00:29:42.572230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.484 [2024-05-15 00:29:42.572233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.742 [2024-05-15 00:29:42.735805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.742 Malloc0 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.742 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.742 [2024-05-15 00:29:42.789319] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:16.742 [2024-05-15 00:29:42.789606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:16.743 test case1: single bdev can't be used in multiple subsystems 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.743 [2024-05-15 00:29:42.813417] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:16.743 [2024-05-15 00:29:42.813445] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:16.743 [2024-05-15 00:29:42.813460] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.743 request: 00:14:16.743 { 00:14:16.743 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:16.743 "namespace": { 00:14:16.743 "bdev_name": "Malloc0", 00:14:16.743 "no_auto_visible": false 00:14:16.743 }, 00:14:16.743 "method": "nvmf_subsystem_add_ns", 00:14:16.743 "req_id": 1 00:14:16.743 } 00:14:16.743 Got JSON-RPC error response 00:14:16.743 response: 00:14:16.743 { 00:14:16.743 "code": -32602, 00:14:16.743 "message": "Invalid parameters" 00:14:16.743 } 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:16.743 Adding namespace failed - expected result. 
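For readers following nmic.sh test case 1 above, the RPC sequence it exercises boils down to the following sketch. It assumes a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and that the TCP transport was already created (nvmf_create_transport -t tcp -o -u 8192, as traced earlier); the rpc.py path, NQNs and addresses are copied from the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Subsystem 1 takes an exclusive_write claim on the malloc bdev.
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# A second subsystem may share the listener address, but adding the same bdev
# as a namespace must fail (JSON-RPC error -32602, as in the response above).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    || echo 'Adding namespace failed - expected result.'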
00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:16.743 test case2: host connect to nvmf target in multiple paths 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:16.743 [2024-05-15 00:29:42.821517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.743 00:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.307 00:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:18.239 00:29:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.239 00:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:14:18.239 00:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.239 00:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:18.239 00:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:14:20.135 00:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:20.135 00:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:20.135 00:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.135 00:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:20.135 00:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.135 00:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:14:20.135 00:29:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:20.135 [global] 00:14:20.135 thread=1 00:14:20.135 invalidate=1 00:14:20.135 rw=write 00:14:20.135 time_based=1 00:14:20.135 runtime=1 00:14:20.135 ioengine=libaio 00:14:20.135 direct=1 00:14:20.135 bs=4096 00:14:20.135 iodepth=1 00:14:20.135 norandommap=0 00:14:20.135 numjobs=1 00:14:20.135 00:14:20.135 verify_dump=1 00:14:20.135 verify_backlog=512 00:14:20.135 verify_state_save=0 00:14:20.135 do_verify=1 00:14:20.135 verify=crc32c-intel 00:14:20.135 [job0] 00:14:20.135 filename=/dev/nvme0n1 00:14:20.135 Could not set queue depth (nvme0n1) 00:14:20.135 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:20.135 fio-3.35 00:14:20.135 Starting 1 thread 00:14:21.505 00:14:21.505 job0: (groupid=0, jobs=1): err= 0: pid=861800: Wed May 15 00:29:47 2024 00:14:21.505 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:14:21.505 slat (nsec): min=7818, max=32704, avg=21015.59, stdev=8878.88 00:14:21.505 
clat (usec): min=40908, max=41046, avg=40972.78, stdev=35.18 00:14:21.505 lat (usec): min=40940, max=41078, avg=40993.80, stdev=32.96 00:14:21.505 clat percentiles (usec): 00:14:21.505 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:21.505 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:21.505 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:21.505 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:21.505 | 99.99th=[41157] 00:14:21.505 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:14:21.505 slat (nsec): min=6895, max=44431, avg=11974.17, stdev=6150.44 00:14:21.505 clat (usec): min=205, max=383, avg=233.74, stdev=20.18 00:14:21.505 lat (usec): min=213, max=403, avg=245.71, stdev=23.82 00:14:21.505 clat percentiles (usec): 00:14:21.505 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:14:21.505 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 235], 00:14:21.505 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 265], 00:14:21.505 | 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 383], 99.95th=[ 383], 00:14:21.505 | 99.99th=[ 383] 00:14:21.505 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:21.505 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:21.505 lat (usec) : 250=79.96%, 500=15.92% 00:14:21.505 lat (msec) : 50=4.12% 00:14:21.505 cpu : usr=0.49%, sys=0.78%, ctx=534, majf=0, minf=2 00:14:21.506 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:21.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.506 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.506 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:21.506 00:14:21.506 Run status group 0 (all jobs): 00:14:21.506 READ: bw=85.4KiB/s (87.5kB/s), 85.4KiB/s-85.4KiB/s (87.5kB/s-87.5kB/s), io=88.0KiB (90.1kB), run=1030-1030msec 00:14:21.506 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:14:21.506 00:14:21.506 Disk stats (read/write): 00:14:21.506 nvme0n1: ios=68/512, merge=0/0, ticks=772/115, in_queue=887, util=92.48% 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
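The write/verify pass above is driven by scripts/fio-wrapper. As a rough standalone equivalent, assembled only from the [global]/[job0] parameters and the filename printed in the log (not the wrapper's actual internals; the /tmp path is arbitrary), the same workload could be expressed as:

cat > /tmp/nvmf-write-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-write-verify.fio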
00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.506 rmmod nvme_tcp 00:14:21.506 rmmod nvme_fabrics 00:14:21.506 rmmod nvme_keyring 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 861221 ']' 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 861221 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 861221 ']' 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 861221 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:21.506 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 861221 00:14:21.764 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:21.764 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:21.764 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 861221' 00:14:21.764 killing process with pid 861221 00:14:21.764 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 861221 00:14:21.764 [2024-05-15 00:29:47.684127] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:21.764 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 861221 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.023 00:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.926 00:29:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.926 00:14:23.926 real 0m10.424s 00:14:23.926 user 0m22.540s 00:14:23.926 sys 0m2.606s 00:14:23.926 00:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:23.926 00:29:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.926 ************************************ 00:14:23.926 END TEST nvmf_nmic 00:14:23.926 ************************************ 00:14:23.926 00:29:50 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:23.926 00:29:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:23.926 00:29:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:23.926 00:29:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.184 ************************************ 00:14:24.184 START TEST nvmf_fio_target 00:14:24.184 ************************************ 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:24.184 * Looking for test storage... 00:14:24.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.184 00:29:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.713 00:29:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:26.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:26.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.713 00:29:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:26.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.713 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:26.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.714 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:14:26.972 00:14:26.972 --- 10.0.0.2 ping statistics --- 00:14:26.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.972 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:14:26.972 00:14:26.972 --- 10.0.0.1 ping statistics --- 00:14:26.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.972 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=864286 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 864286 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 864286 ']' 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
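Before the fio target test proper starts, nvmf_tcp_init (nvmf/common.sh) builds the two-port loopback topology used throughout this log: one e810 port (cvl_0_0) is moved into a network namespace and hosts the target at 10.0.0.2, while the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced above (a sketch, not the full script):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, then launch the target inside the namespace
# (the harness backgrounds it and waits for the RPC socket via waitforlisten).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &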
00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:26.972 00:29:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.972 [2024-05-15 00:29:53.008925] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:14:26.972 [2024-05-15 00:29:53.009039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.972 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.972 [2024-05-15 00:29:53.098840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.230 [2024-05-15 00:29:53.221064] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.230 [2024-05-15 00:29:53.221115] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.230 [2024-05-15 00:29:53.221132] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.230 [2024-05-15 00:29:53.221145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.230 [2024-05-15 00:29:53.221158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.230 [2024-05-15 00:29:53.221228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.230 [2024-05-15 00:29:53.221279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.230 [2024-05-15 00:29:53.221282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.230 [2024-05-15 00:29:53.221255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.230 00:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:27.230 00:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:14:27.230 00:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.230 00:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:27.230 00:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.230 00:29:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.230 00:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:27.486 [2024-05-15 00:29:53.579253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.486 00:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.743 00:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:27.743 00:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.000 00:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:28.000 00:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.565 00:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:28.565 00:29:54 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.565 00:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:28.565 00:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:28.823 00:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.081 00:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:29.081 00:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.646 00:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:29.646 00:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.646 00:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:29.646 00:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:29.903 00:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.161 00:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:30.161 00:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:30.419 00:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:30.419 00:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.676 00:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.932 [2024-05-15 00:29:57.055642] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:30.932 [2024-05-15 00:29:57.055958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.932 00:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:31.189 00:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:31.446 00:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.376 00:29:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:14:32.376 00:29:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:14:32.376 00:29:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.376 00:29:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:14:32.376 00:29:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:14:32.376 00:29:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:14:34.315 00:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:34.315 00:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:34.315 00:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.315 00:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:14:34.315 00:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.315 00:30:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:14:34.315 00:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:34.315 [global] 00:14:34.315 thread=1 00:14:34.315 invalidate=1 00:14:34.315 rw=write 00:14:34.315 time_based=1 00:14:34.315 runtime=1 00:14:34.315 ioengine=libaio 00:14:34.315 direct=1 00:14:34.315 bs=4096 00:14:34.315 iodepth=1 00:14:34.315 norandommap=0 00:14:34.315 numjobs=1 00:14:34.315 00:14:34.315 verify_dump=1 00:14:34.315 verify_backlog=512 00:14:34.315 verify_state_save=0 00:14:34.315 do_verify=1 00:14:34.315 verify=crc32c-intel 00:14:34.315 [job0] 00:14:34.315 filename=/dev/nvme0n1 00:14:34.315 [job1] 00:14:34.315 filename=/dev/nvme0n2 00:14:34.315 [job2] 00:14:34.315 filename=/dev/nvme0n3 00:14:34.315 [job3] 00:14:34.315 filename=/dev/nvme0n4 00:14:34.315 Could not set queue depth (nvme0n1) 00:14:34.315 Could not set queue depth (nvme0n2) 00:14:34.315 Could not set queue depth (nvme0n3) 00:14:34.315 Could not set queue depth (nvme0n4) 00:14:34.573 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.573 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.573 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.573 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.573 fio-3.35 00:14:34.573 Starting 4 threads 00:14:35.943 00:14:35.943 job0: (groupid=0, jobs=1): err= 0: pid=865303: Wed May 15 00:30:01 2024 00:14:35.943 read: IOPS=1235, BW=4943KiB/s (5062kB/s)(4948KiB/1001msec) 00:14:35.943 slat (nsec): min=5611, max=33186, avg=9357.04, stdev=4967.59 00:14:35.943 clat (usec): min=328, max=1597, avg=419.12, stdev=82.05 00:14:35.943 lat (usec): min=335, max=1603, avg=428.47, stdev=82.84 00:14:35.943 clat percentiles (usec): 00:14:35.943 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:14:35.943 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 408], 00:14:35.943 | 70.00th=[ 457], 80.00th=[ 490], 90.00th=[ 537], 95.00th=[ 578], 00:14:35.943 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 693], 99.95th=[ 1598], 00:14:35.943 | 99.99th=[ 1598] 
00:14:35.943 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:35.943 slat (nsec): min=6894, max=73353, avg=14588.11, stdev=9031.60 00:14:35.943 clat (usec): min=203, max=1113, avg=283.43, stdev=65.51 00:14:35.943 lat (usec): min=211, max=1122, avg=298.02, stdev=69.85 00:14:35.943 clat percentiles (usec): 00:14:35.943 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 231], 00:14:35.943 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 289], 00:14:35.943 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 375], 95.00th=[ 412], 00:14:35.943 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 529], 99.95th=[ 1106], 00:14:35.944 | 99.99th=[ 1106] 00:14:35.944 bw ( KiB/s): min= 6752, max= 6752, per=42.20%, avg=6752.00, stdev= 0.00, samples=1 00:14:35.944 iops : min= 1688, max= 1688, avg=1688.00, stdev= 0.00, samples=1 00:14:35.944 lat (usec) : 250=23.22%, 500=68.99%, 750=7.72% 00:14:35.944 lat (msec) : 2=0.07% 00:14:35.944 cpu : usr=2.70%, sys=4.40%, ctx=2776, majf=0, minf=2 00:14:35.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 issued rwts: total=1237,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.944 job1: (groupid=0, jobs=1): err= 0: pid=865318: Wed May 15 00:30:01 2024 00:14:35.944 read: IOPS=18, BW=74.2KiB/s (76.0kB/s)(76.0KiB/1024msec) 00:14:35.944 slat (nsec): min=13358, max=40051, avg=15849.53, stdev=6000.21 00:14:35.944 clat (usec): min=40908, max=45023, avg=41413.24, stdev=1268.93 00:14:35.944 lat (usec): min=40922, max=45041, avg=41429.09, stdev=1269.57 00:14:35.944 clat percentiles (usec): 00:14:35.944 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:35.944 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:35.944 | 70.00th=[41157], 80.00th=[41157], 90.00th=[44827], 95.00th=[44827], 00:14:35.944 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:35.944 | 99.99th=[44827] 00:14:35.944 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:14:35.944 slat (nsec): min=7795, max=75509, avg=21100.55, stdev=12221.25 00:14:35.944 clat (usec): min=228, max=886, avg=435.35, stdev=131.58 00:14:35.944 lat (usec): min=237, max=926, avg=456.45, stdev=133.51 00:14:35.944 clat percentiles (usec): 00:14:35.944 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 265], 20.00th=[ 289], 00:14:35.944 | 30.00th=[ 338], 40.00th=[ 400], 50.00th=[ 433], 60.00th=[ 478], 00:14:35.944 | 70.00th=[ 519], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 635], 00:14:35.944 | 99.00th=[ 734], 99.50th=[ 799], 99.90th=[ 889], 99.95th=[ 889], 00:14:35.944 | 99.99th=[ 889] 00:14:35.944 bw ( KiB/s): min= 4096, max= 4096, per=25.60%, avg=4096.00, stdev= 0.00, samples=1 00:14:35.944 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:35.944 lat (usec) : 250=5.65%, 500=57.06%, 750=32.77%, 1000=0.94% 00:14:35.944 lat (msec) : 50=3.58% 00:14:35.944 cpu : usr=1.08%, sys=0.98%, ctx=531, majf=0, minf=1 00:14:35.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 issued rwts: total=19,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:14:35.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.944 job2: (groupid=0, jobs=1): err= 0: pid=865357: Wed May 15 00:30:01 2024 00:14:35.944 read: IOPS=1296, BW=5187KiB/s (5311kB/s)(5192KiB/1001msec) 00:14:35.944 slat (nsec): min=4536, max=71086, avg=16481.49, stdev=9848.17 00:14:35.944 clat (usec): min=325, max=757, avg=429.06, stdev=52.71 00:14:35.944 lat (usec): min=331, max=791, avg=445.54, stdev=55.76 00:14:35.944 clat percentiles (usec): 00:14:35.944 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 388], 00:14:35.944 | 30.00th=[ 400], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 429], 00:14:35.944 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 510], 95.00th=[ 529], 00:14:35.944 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 652], 99.95th=[ 758], 00:14:35.944 | 99.99th=[ 758] 00:14:35.944 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:35.944 slat (nsec): min=5692, max=61695, avg=10907.18, stdev=6518.15 00:14:35.944 clat (usec): min=205, max=2984, avg=254.94, stdev=77.72 00:14:35.944 lat (usec): min=211, max=3007, avg=265.85, stdev=79.19 00:14:35.944 clat percentiles (usec): 00:14:35.944 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 233], 00:14:35.944 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:14:35.944 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 314], 00:14:35.944 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 494], 99.95th=[ 2999], 00:14:35.944 | 99.99th=[ 2999] 00:14:35.944 bw ( KiB/s): min= 8120, max= 8120, per=50.75%, avg=8120.00, stdev= 0.00, samples=1 00:14:35.944 iops : min= 2030, max= 2030, avg=2030.00, stdev= 0.00, samples=1 00:14:35.944 lat (usec) : 250=33.70%, 500=60.94%, 750=5.29%, 1000=0.04% 00:14:35.944 lat (msec) : 4=0.04% 00:14:35.944 cpu : usr=1.90%, sys=4.10%, ctx=2838, majf=0, minf=1 00:14:35.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 issued rwts: total=1298,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.944 job3: (groupid=0, jobs=1): err= 0: pid=865366: Wed May 15 00:30:01 2024 00:14:35.944 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:14:35.944 slat (nsec): min=9242, max=32281, avg=15412.40, stdev=4277.28 00:14:35.944 clat (usec): min=40728, max=43237, avg=41885.72, stdev=503.83 00:14:35.944 lat (usec): min=40745, max=43251, avg=41901.13, stdev=501.99 00:14:35.944 clat percentiles (usec): 00:14:35.944 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:14:35.944 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:35.944 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:35.944 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:14:35.944 | 99.99th=[43254] 00:14:35.944 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:14:35.944 slat (nsec): min=7931, max=59379, avg=15975.95, stdev=8681.99 00:14:35.944 clat (usec): min=223, max=798, avg=301.32, stdev=49.89 00:14:35.944 lat (usec): min=231, max=831, avg=317.30, stdev=53.69 00:14:35.944 clat percentiles (usec): 00:14:35.944 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 262], 00:14:35.944 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 306], 00:14:35.944 
| 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 392], 00:14:35.944 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 799], 99.95th=[ 799], 00:14:35.944 | 99.99th=[ 799] 00:14:35.944 bw ( KiB/s): min= 4096, max= 4096, per=25.60%, avg=4096.00, stdev= 0.00, samples=1 00:14:35.944 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:35.944 lat (usec) : 250=9.96%, 500=86.09%, 1000=0.19% 00:14:35.944 lat (msec) : 50=3.76% 00:14:35.944 cpu : usr=0.90%, sys=0.70%, ctx=532, majf=0, minf=1 00:14:35.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.944 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.944 00:14:35.944 Run status group 0 (all jobs): 00:14:35.944 READ: bw=9.82MiB/s (10.3MB/s), 74.2KiB/s-5187KiB/s (76.0kB/s-5311kB/s), io=10.1MiB (10.5MB), run=1001-1024msec 00:14:35.944 WRITE: bw=15.6MiB/s (16.4MB/s), 2000KiB/s-6138KiB/s (2048kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1024msec 00:14:35.944 00:14:35.944 Disk stats (read/write): 00:14:35.944 nvme0n1: ios=1074/1322, merge=0/0, ticks=465/363, in_queue=828, util=87.37% 00:14:35.944 nvme0n2: ios=64/512, merge=0/0, ticks=668/210, in_queue=878, util=91.24% 00:14:35.944 nvme0n3: ios=1081/1461, merge=0/0, ticks=479/359, in_queue=838, util=95.06% 00:14:35.944 nvme0n4: ios=73/512, merge=0/0, ticks=778/152, in_queue=930, util=96.39% 00:14:35.944 00:30:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:35.944 [global] 00:14:35.944 thread=1 00:14:35.944 invalidate=1 00:14:35.944 rw=randwrite 00:14:35.944 time_based=1 00:14:35.944 runtime=1 00:14:35.944 ioengine=libaio 00:14:35.944 direct=1 00:14:35.944 bs=4096 00:14:35.944 iodepth=1 00:14:35.944 norandommap=0 00:14:35.944 numjobs=1 00:14:35.944 00:14:35.944 verify_dump=1 00:14:35.944 verify_backlog=512 00:14:35.944 verify_state_save=0 00:14:35.944 do_verify=1 00:14:35.944 verify=crc32c-intel 00:14:35.944 [job0] 00:14:35.944 filename=/dev/nvme0n1 00:14:35.944 [job1] 00:14:35.944 filename=/dev/nvme0n2 00:14:35.944 [job2] 00:14:35.944 filename=/dev/nvme0n3 00:14:35.944 [job3] 00:14:35.944 filename=/dev/nvme0n4 00:14:35.944 Could not set queue depth (nvme0n1) 00:14:35.944 Could not set queue depth (nvme0n2) 00:14:35.944 Could not set queue depth (nvme0n3) 00:14:35.944 Could not set queue depth (nvme0n4) 00:14:35.944 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.944 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.944 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.944 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.944 fio-3.35 00:14:35.944 Starting 4 threads 00:14:37.321 00:14:37.321 job0: (groupid=0, jobs=1): err= 0: pid=865705: Wed May 15 00:30:03 2024 00:14:37.321 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:14:37.321 slat (nsec): min=6195, max=55479, avg=17817.78, stdev=7589.70 00:14:37.321 clat (usec): min=339, max=895, avg=525.49, stdev=64.98 00:14:37.321 lat (usec): 
min=352, max=915, avg=543.31, stdev=66.67 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 416], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 482], 00:14:37.321 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 510], 60.00th=[ 519], 00:14:37.321 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 660], 00:14:37.321 | 99.00th=[ 742], 99.50th=[ 799], 99.90th=[ 824], 99.95th=[ 898], 00:14:37.321 | 99.99th=[ 898] 00:14:37.321 write: IOPS=1391, BW=5566KiB/s (5700kB/s)(5572KiB/1001msec); 0 zone resets 00:14:37.321 slat (nsec): min=6797, max=86081, avg=15296.71, stdev=12209.84 00:14:37.321 clat (usec): min=210, max=1204, avg=295.25, stdev=82.76 00:14:37.321 lat (usec): min=218, max=1217, avg=310.55, stdev=90.19 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:14:37.321 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 293], 00:14:37.321 | 70.00th=[ 326], 80.00th=[ 367], 90.00th=[ 416], 95.00th=[ 445], 00:14:37.321 | 99.00th=[ 506], 99.50th=[ 611], 99.90th=[ 832], 99.95th=[ 1205], 00:14:37.321 | 99.99th=[ 1205] 00:14:37.321 bw ( KiB/s): min= 6464, max= 6464, per=38.55%, avg=6464.00, stdev= 0.00, samples=1 00:14:37.321 iops : min= 1616, max= 1616, avg=1616.00, stdev= 0.00, samples=1 00:14:37.321 lat (usec) : 250=26.02%, 500=49.40%, 750=24.08%, 1000=0.46% 00:14:37.321 lat (msec) : 2=0.04% 00:14:37.321 cpu : usr=1.80%, sys=5.00%, ctx=2418, majf=0, minf=2 00:14:37.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 issued rwts: total=1024,1393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.321 job1: (groupid=0, jobs=1): err= 0: pid=865706: Wed May 15 00:30:03 2024 00:14:37.321 read: IOPS=119, BW=478KiB/s (490kB/s)(480KiB/1004msec) 00:14:37.321 slat (nsec): min=5392, max=35226, avg=13430.87, stdev=4175.72 00:14:37.321 clat (usec): min=365, max=42020, avg=7172.00, stdev=15137.15 00:14:37.321 lat (usec): min=376, max=42056, avg=7185.43, stdev=15139.13 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 408], 00:14:37.321 | 30.00th=[ 420], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 482], 00:14:37.321 | 70.00th=[ 519], 80.00th=[ 578], 90.00th=[41157], 95.00th=[42206], 00:14:37.321 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:37.321 | 99.99th=[42206] 00:14:37.321 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:14:37.321 slat (nsec): min=6517, max=39831, avg=9507.28, stdev=4223.24 00:14:37.321 clat (usec): min=215, max=523, avg=262.48, stdev=39.88 00:14:37.321 lat (usec): min=223, max=540, avg=271.99, stdev=40.68 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 237], 00:14:37.321 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 255], 00:14:37.321 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 355], 00:14:37.321 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 523], 99.95th=[ 523], 00:14:37.321 | 99.99th=[ 523] 00:14:37.321 bw ( KiB/s): min= 4096, max= 4096, per=24.42%, avg=4096.00, stdev= 0.00, samples=1 00:14:37.321 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:37.321 lat (usec) : 250=41.14%, 500=51.90%, 750=3.64%, 1000=0.16% 
00:14:37.321 lat (msec) : 50=3.16% 00:14:37.321 cpu : usr=0.30%, sys=0.60%, ctx=634, majf=0, minf=1 00:14:37.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 issued rwts: total=120,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.321 job2: (groupid=0, jobs=1): err= 0: pid=865707: Wed May 15 00:30:03 2024 00:14:37.321 read: IOPS=785, BW=3143KiB/s (3218kB/s)(3168KiB/1008msec) 00:14:37.321 slat (nsec): min=4988, max=64577, avg=18743.65, stdev=10904.17 00:14:37.321 clat (usec): min=339, max=41581, avg=883.03, stdev=4063.48 00:14:37.321 lat (usec): min=349, max=41595, avg=901.78, stdev=4063.41 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 351], 5.00th=[ 371], 10.00th=[ 388], 20.00th=[ 412], 00:14:37.321 | 30.00th=[ 429], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 482], 00:14:37.321 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 586], 00:14:37.321 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:37.321 | 99.99th=[41681] 00:14:37.321 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:14:37.321 slat (nsec): min=6429, max=31215, avg=10132.90, stdev=3809.03 00:14:37.321 clat (usec): min=204, max=887, avg=269.57, stdev=62.60 00:14:37.321 lat (usec): min=213, max=896, avg=279.70, stdev=63.56 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:14:37.321 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:14:37.321 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 338], 95.00th=[ 392], 00:14:37.321 | 99.00th=[ 482], 99.50th=[ 586], 99.90th=[ 766], 99.95th=[ 889], 00:14:37.321 | 99.99th=[ 889] 00:14:37.321 bw ( KiB/s): min= 4096, max= 4096, per=24.42%, avg=4096.00, stdev= 0.00, samples=2 00:14:37.321 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:14:37.321 lat (usec) : 250=29.41%, 500=56.22%, 750=13.77%, 1000=0.11% 00:14:37.321 lat (msec) : 10=0.06%, 50=0.44% 00:14:37.321 cpu : usr=1.59%, sys=2.28%, ctx=1817, majf=0, minf=1 00:14:37.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 issued rwts: total=792,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.321 job3: (groupid=0, jobs=1): err= 0: pid=865708: Wed May 15 00:30:03 2024 00:14:37.321 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:14:37.321 slat (nsec): min=5818, max=55800, avg=16361.46, stdev=7871.53 00:14:37.321 clat (usec): min=353, max=639, avg=434.87, stdev=45.60 00:14:37.321 lat (usec): min=360, max=657, avg=451.23, stdev=48.60 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 363], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 400], 00:14:37.321 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 437], 00:14:37.321 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 519], 00:14:37.321 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 611], 99.95th=[ 644], 00:14:37.321 | 99.99th=[ 644] 00:14:37.321 write: IOPS=1295, BW=5183KiB/s (5307kB/s)(5188KiB/1001msec); 0 zone resets 
00:14:37.321 slat (nsec): min=6537, max=82706, avg=20174.00, stdev=11830.19 00:14:37.321 clat (usec): min=262, max=589, avg=385.81, stdev=44.98 00:14:37.321 lat (usec): min=303, max=630, avg=405.98, stdev=51.44 00:14:37.321 clat percentiles (usec): 00:14:37.321 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:14:37.321 | 30.00th=[ 359], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 396], 00:14:37.321 | 70.00th=[ 400], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[ 465], 00:14:37.321 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 578], 99.95th=[ 586], 00:14:37.321 | 99.99th=[ 586] 00:14:37.321 bw ( KiB/s): min= 5488, max= 5488, per=32.73%, avg=5488.00, stdev= 0.00, samples=1 00:14:37.321 iops : min= 1372, max= 1372, avg=1372.00, stdev= 0.00, samples=1 00:14:37.321 lat (usec) : 500=95.52%, 750=4.48% 00:14:37.321 cpu : usr=2.30%, sys=4.30%, ctx=2322, majf=0, minf=1 00:14:37.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.321 issued rwts: total=1024,1297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.321 00:14:37.321 Run status group 0 (all jobs): 00:14:37.321 READ: bw=11.5MiB/s (12.0MB/s), 478KiB/s-4092KiB/s (490kB/s-4190kB/s), io=11.6MiB (12.1MB), run=1001-1008msec 00:14:37.321 WRITE: bw=16.4MiB/s (17.2MB/s), 2040KiB/s-5566KiB/s (2089kB/s-5700kB/s), io=16.5MiB (17.3MB), run=1001-1008msec 00:14:37.321 00:14:37.321 Disk stats (read/write): 00:14:37.321 nvme0n1: ios=1074/1046, merge=0/0, ticks=728/272, in_queue=1000, util=89.88% 00:14:37.321 nvme0n2: ios=165/512, merge=0/0, ticks=978/132, in_queue=1110, util=93.90% 00:14:37.321 nvme0n3: ios=753/1024, merge=0/0, ticks=1490/271, in_queue=1761, util=96.66% 00:14:37.321 nvme0n4: ios=1015/1024, merge=0/0, ticks=1333/367, in_queue=1700, util=98.11% 00:14:37.321 00:30:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:37.321 [global] 00:14:37.321 thread=1 00:14:37.321 invalidate=1 00:14:37.321 rw=write 00:14:37.321 time_based=1 00:14:37.321 runtime=1 00:14:37.321 ioengine=libaio 00:14:37.321 direct=1 00:14:37.321 bs=4096 00:14:37.321 iodepth=128 00:14:37.321 norandommap=0 00:14:37.321 numjobs=1 00:14:37.321 00:14:37.322 verify_dump=1 00:14:37.322 verify_backlog=512 00:14:37.322 verify_state_save=0 00:14:37.322 do_verify=1 00:14:37.322 verify=crc32c-intel 00:14:37.322 [job0] 00:14:37.322 filename=/dev/nvme0n1 00:14:37.322 [job1] 00:14:37.322 filename=/dev/nvme0n2 00:14:37.322 [job2] 00:14:37.322 filename=/dev/nvme0n3 00:14:37.322 [job3] 00:14:37.322 filename=/dev/nvme0n4 00:14:37.322 Could not set queue depth (nvme0n1) 00:14:37.322 Could not set queue depth (nvme0n2) 00:14:37.322 Could not set queue depth (nvme0n3) 00:14:37.322 Could not set queue depth (nvme0n4) 00:14:37.322 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:37.322 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:37.322 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:37.322 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:37.322 fio-3.35 
00:14:37.322 Starting 4 threads 00:14:38.695 00:14:38.695 job0: (groupid=0, jobs=1): err= 0: pid=865938: Wed May 15 00:30:04 2024 00:14:38.695 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:14:38.695 slat (usec): min=2, max=22536, avg=128.90, stdev=998.87 00:14:38.695 clat (usec): min=6039, max=52010, avg=16403.32, stdev=7371.04 00:14:38.695 lat (usec): min=6066, max=52025, avg=16532.22, stdev=7452.42 00:14:38.695 clat percentiles (usec): 00:14:38.695 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11338], 00:14:38.695 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[14222], 00:14:38.695 | 70.00th=[17957], 80.00th=[20317], 90.00th=[25822], 95.00th=[30016], 00:14:38.695 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44303], 99.95th=[47449], 00:14:38.695 | 99.99th=[52167] 00:14:38.695 write: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1007msec); 0 zone resets 00:14:38.695 slat (usec): min=4, max=20449, avg=218.91, stdev=1246.10 00:14:38.695 clat (usec): min=3358, max=89675, avg=30515.49, stdev=20284.54 00:14:38.695 lat (usec): min=3365, max=89683, avg=30734.41, stdev=20415.53 00:14:38.695 clat percentiles (usec): 00:14:38.695 | 1.00th=[ 5080], 5.00th=[ 6783], 10.00th=[ 8586], 20.00th=[11731], 00:14:38.695 | 30.00th=[17433], 40.00th=[20055], 50.00th=[21627], 60.00th=[31065], 00:14:38.695 | 70.00th=[41157], 80.00th=[50594], 90.00th=[58459], 95.00th=[71828], 00:14:38.695 | 99.00th=[84411], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:14:38.695 | 99.99th=[89654] 00:14:38.695 bw ( KiB/s): min= 8664, max=12672, per=17.73%, avg=10668.00, stdev=2834.08, samples=2 00:14:38.695 iops : min= 2166, max= 3168, avg=2667.00, stdev=708.52, samples=2 00:14:38.695 lat (msec) : 4=0.39%, 10=11.60%, 20=46.79%, 50=30.74%, 100=10.48% 00:14:38.695 cpu : usr=2.68%, sys=4.27%, ctx=287, majf=0, minf=1 00:14:38.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:38.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.695 issued rwts: total=2560,2794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.695 job1: (groupid=0, jobs=1): err= 0: pid=865939: Wed May 15 00:30:04 2024 00:14:38.695 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:14:38.695 slat (usec): min=3, max=10582, avg=101.03, stdev=660.83 00:14:38.695 clat (usec): min=5737, max=32422, avg=12483.78, stdev=3997.51 00:14:38.695 lat (usec): min=5744, max=32430, avg=12584.81, stdev=4050.96 00:14:38.695 clat percentiles (usec): 00:14:38.695 | 1.00th=[ 6980], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:14:38.695 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:14:38.695 | 70.00th=[12387], 80.00th=[14615], 90.00th=[17695], 95.00th=[20579], 00:14:38.695 | 99.00th=[28181], 99.50th=[29492], 99.90th=[32375], 99.95th=[32375], 00:14:38.695 | 99.99th=[32375] 00:14:38.695 write: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1010msec); 0 zone resets 00:14:38.695 slat (usec): min=4, max=10103, avg=131.50, stdev=683.38 00:14:38.695 clat (usec): min=1489, max=88302, avg=18305.82, stdev=16239.46 00:14:38.695 lat (usec): min=1502, max=88310, avg=18437.32, stdev=16346.99 00:14:38.695 clat percentiles (usec): 00:14:38.695 | 1.00th=[ 4146], 5.00th=[ 5735], 10.00th=[ 7308], 20.00th=[ 9241], 00:14:38.695 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:14:38.695 | 
70.00th=[17433], 80.00th=[22938], 90.00th=[42730], 95.00th=[52691], 00:14:38.695 | 99.00th=[82314], 99.50th=[83362], 99.90th=[88605], 99.95th=[88605], 00:14:38.695 | 99.99th=[88605] 00:14:38.695 bw ( KiB/s): min=11344, max=21424, per=27.23%, avg=16384.00, stdev=7127.64, samples=2 00:14:38.695 iops : min= 2836, max= 5356, avg=4096.00, stdev=1781.91, samples=2 00:14:38.695 lat (msec) : 2=0.02%, 4=0.40%, 10=19.13%, 20=63.25%, 50=14.31% 00:14:38.695 lat (msec) : 100=2.89% 00:14:38.695 cpu : usr=4.46%, sys=6.74%, ctx=481, majf=0, minf=1 00:14:38.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:38.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.695 issued rwts: total=4096,4206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.695 job2: (groupid=0, jobs=1): err= 0: pid=865940: Wed May 15 00:30:04 2024 00:14:38.696 read: IOPS=3309, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1002msec) 00:14:38.696 slat (usec): min=3, max=30449, avg=149.39, stdev=1040.88 00:14:38.696 clat (usec): min=1473, max=43908, avg=18845.61, stdev=6087.61 00:14:38.696 lat (usec): min=6427, max=49394, avg=18994.99, stdev=6133.67 00:14:38.696 clat percentiles (usec): 00:14:38.696 | 1.00th=[ 6783], 5.00th=[12256], 10.00th=[13435], 20.00th=[14615], 00:14:38.696 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17957], 60.00th=[18744], 00:14:38.696 | 70.00th=[20055], 80.00th=[21627], 90.00th=[23987], 95.00th=[27395], 00:14:38.696 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:14:38.696 | 99.99th=[43779] 00:14:38.696 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:14:38.696 slat (usec): min=4, max=22635, avg=133.11, stdev=876.63 00:14:38.696 clat (usec): min=8503, max=35122, avg=17920.98, stdev=4642.34 00:14:38.696 lat (usec): min=8530, max=35142, avg=18054.08, stdev=4679.15 00:14:38.696 clat percentiles (usec): 00:14:38.696 | 1.00th=[11076], 5.00th=[12518], 10.00th=[13042], 20.00th=[14484], 00:14:38.696 | 30.00th=[15139], 40.00th=[16188], 50.00th=[17171], 60.00th=[17695], 00:14:38.696 | 70.00th=[19268], 80.00th=[20579], 90.00th=[22938], 95.00th=[25822], 00:14:38.696 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33424], 99.95th=[34341], 00:14:38.696 | 99.99th=[34866] 00:14:38.696 bw ( KiB/s): min=13384, max=15318, per=23.85%, avg=14351.00, stdev=1367.54, samples=2 00:14:38.696 iops : min= 3346, max= 3829, avg=3587.50, stdev=341.53, samples=2 00:14:38.696 lat (msec) : 2=0.01%, 10=1.33%, 20=70.64%, 50=28.01% 00:14:38.696 cpu : usr=3.40%, sys=6.39%, ctx=289, majf=0, minf=1 00:14:38.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:38.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.696 issued rwts: total=3316,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.696 job3: (groupid=0, jobs=1): err= 0: pid=865941: Wed May 15 00:30:04 2024 00:14:38.696 read: IOPS=4572, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1006msec) 00:14:38.696 slat (usec): min=3, max=20084, avg=112.04, stdev=874.99 00:14:38.696 clat (usec): min=2473, max=43691, avg=14377.11, stdev=4583.15 00:14:38.696 lat (usec): min=5683, max=43710, avg=14489.15, stdev=4644.49 00:14:38.696 clat percentiles (usec): 
00:14:38.696 | 1.00th=[ 6128], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:14:38.696 | 30.00th=[11600], 40.00th=[11731], 50.00th=[12125], 60.00th=[13960], 00:14:38.696 | 70.00th=[15270], 80.00th=[17957], 90.00th=[22152], 95.00th=[24511], 00:14:38.696 | 99.00th=[27395], 99.50th=[27395], 99.90th=[27395], 99.95th=[38011], 00:14:38.696 | 99.99th=[43779] 00:14:38.696 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:14:38.696 slat (usec): min=4, max=31327, avg=97.37, stdev=761.66 00:14:38.696 clat (usec): min=4007, max=55082, avg=12456.98, stdev=4945.17 00:14:38.696 lat (usec): min=4113, max=55111, avg=12554.36, stdev=5012.77 00:14:38.696 clat percentiles (usec): 00:14:38.696 | 1.00th=[ 4555], 5.00th=[ 5997], 10.00th=[ 7111], 20.00th=[ 8848], 00:14:38.696 | 30.00th=[10028], 40.00th=[11994], 50.00th=[12518], 60.00th=[12649], 00:14:38.696 | 70.00th=[12911], 80.00th=[13042], 90.00th=[20317], 95.00th=[23462], 00:14:38.696 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30016], 99.95th=[31851], 00:14:38.696 | 99.99th=[55313] 00:14:38.696 bw ( KiB/s): min=16384, max=20480, per=30.64%, avg=18432.00, stdev=2896.31, samples=2 00:14:38.696 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:14:38.696 lat (msec) : 4=0.01%, 10=17.46%, 20=69.84%, 50=12.67%, 100=0.01% 00:14:38.696 cpu : usr=5.47%, sys=7.76%, ctx=503, majf=0, minf=1 00:14:38.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:38.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.696 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.696 00:14:38.696 Run status group 0 (all jobs): 00:14:38.696 READ: bw=56.4MiB/s (59.1MB/s), 9.93MiB/s-17.9MiB/s (10.4MB/s-18.7MB/s), io=56.9MiB (59.7MB), run=1002-1010msec 00:14:38.696 WRITE: bw=58.8MiB/s (61.6MB/s), 10.8MiB/s-17.9MiB/s (11.4MB/s-18.8MB/s), io=59.3MiB (62.2MB), run=1002-1010msec 00:14:38.696 00:14:38.696 Disk stats (read/write): 00:14:38.696 nvme0n1: ios=2098/2383, merge=0/0, ticks=26518/61505, in_queue=88023, util=90.98% 00:14:38.696 nvme0n2: ios=3624/3631, merge=0/0, ticks=43466/61315, in_queue=104781, util=98.68% 00:14:38.696 nvme0n3: ios=2672/3072, merge=0/0, ticks=27190/25633, in_queue=52823, util=99.16% 00:14:38.696 nvme0n4: ios=3603/3943, merge=0/0, ticks=51889/48076, in_queue=99965, util=97.35% 00:14:38.696 00:30:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:38.696 [global] 00:14:38.696 thread=1 00:14:38.696 invalidate=1 00:14:38.696 rw=randwrite 00:14:38.696 time_based=1 00:14:38.696 runtime=1 00:14:38.696 ioengine=libaio 00:14:38.696 direct=1 00:14:38.696 bs=4096 00:14:38.696 iodepth=128 00:14:38.696 norandommap=0 00:14:38.696 numjobs=1 00:14:38.696 00:14:38.696 verify_dump=1 00:14:38.696 verify_backlog=512 00:14:38.696 verify_state_save=0 00:14:38.696 do_verify=1 00:14:38.696 verify=crc32c-intel 00:14:38.696 [job0] 00:14:38.696 filename=/dev/nvme0n1 00:14:38.696 [job1] 00:14:38.696 filename=/dev/nvme0n2 00:14:38.696 [job2] 00:14:38.696 filename=/dev/nvme0n3 00:14:38.696 [job3] 00:14:38.696 filename=/dev/nvme0n4 00:14:38.696 Could not set queue depth (nvme0n1) 00:14:38.696 Could not set queue depth (nvme0n2) 00:14:38.696 Could not set queue depth (nvme0n3) 00:14:38.696 
Could not set queue depth (nvme0n4) 00:14:38.696 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.696 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.696 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.696 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.696 fio-3.35 00:14:38.696 Starting 4 threads 00:14:40.074 00:14:40.074 job0: (groupid=0, jobs=1): err= 0: pid=866167: Wed May 15 00:30:06 2024 00:14:40.074 read: IOPS=3481, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1007msec) 00:14:40.074 slat (usec): min=2, max=33775, avg=138.86, stdev=1311.97 00:14:40.074 clat (usec): min=1292, max=174536, avg=18729.83, stdev=19682.53 00:14:40.074 lat (usec): min=1296, max=174558, avg=18868.69, stdev=19828.66 00:14:40.074 clat percentiles (msec): 00:14:40.074 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 10], 20.00th=[ 12], 00:14:40.074 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15], 00:14:40.074 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 27], 95.00th=[ 59], 00:14:40.074 | 99.00th=[ 107], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 176], 00:14:40.074 | 99.99th=[ 176] 00:14:40.074 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:14:40.074 slat (usec): min=3, max=32374, avg=105.15, stdev=817.69 00:14:40.074 clat (usec): min=983, max=180358, avg=17242.49, stdev=20322.85 00:14:40.074 lat (usec): min=991, max=180374, avg=17347.65, stdev=20391.94 00:14:40.074 clat percentiles (msec): 00:14:40.074 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:14:40.074 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:14:40.074 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 34], 95.00th=[ 43], 00:14:40.074 | 99.00th=[ 140], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 180], 00:14:40.074 | 99.99th=[ 182] 00:14:40.074 bw ( KiB/s): min= 8192, max=20480, per=22.98%, avg=14336.00, stdev=8688.93, samples=2 00:14:40.074 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:14:40.074 lat (usec) : 1000=0.11% 00:14:40.074 lat (msec) : 2=0.63%, 4=1.28%, 10=15.98%, 20=63.68%, 50=13.64% 00:14:40.074 lat (msec) : 100=3.19%, 250=1.48% 00:14:40.074 cpu : usr=3.08%, sys=4.27%, ctx=308, majf=0, minf=1 00:14:40.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:40.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.074 issued rwts: total=3506,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.074 job1: (groupid=0, jobs=1): err= 0: pid=866170: Wed May 15 00:30:06 2024 00:14:40.074 read: IOPS=3114, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1007msec) 00:14:40.074 slat (usec): min=2, max=45955, avg=151.35, stdev=1229.31 00:14:40.074 clat (usec): min=4992, max=71934, avg=19912.56, stdev=12815.78 00:14:40.074 lat (usec): min=6113, max=71937, avg=20063.91, stdev=12877.09 00:14:40.074 clat percentiles (usec): 00:14:40.074 | 1.00th=[ 6194], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[12649], 00:14:40.074 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14877], 60.00th=[16909], 00:14:40.074 | 70.00th=[19006], 80.00th=[23987], 90.00th=[34341], 95.00th=[47973], 00:14:40.074 | 99.00th=[69731], 99.50th=[71828], 
99.90th=[71828], 99.95th=[71828], 00:14:40.074 | 99.99th=[71828] 00:14:40.074 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:14:40.074 slat (usec): min=3, max=38102, avg=137.74, stdev=1023.08 00:14:40.074 clat (usec): min=3538, max=54926, avg=17399.21, stdev=8453.23 00:14:40.074 lat (usec): min=3550, max=54932, avg=17536.95, stdev=8493.91 00:14:40.074 clat percentiles (usec): 00:14:40.074 | 1.00th=[ 4015], 5.00th=[ 7701], 10.00th=[10159], 20.00th=[12649], 00:14:40.074 | 30.00th=[13566], 40.00th=[14746], 50.00th=[15533], 60.00th=[16450], 00:14:40.074 | 70.00th=[17695], 80.00th=[20841], 90.00th=[26084], 95.00th=[34866], 00:14:40.074 | 99.00th=[53216], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:14:40.074 | 99.99th=[54789] 00:14:40.074 bw ( KiB/s): min=12288, max=15872, per=22.57%, avg=14080.00, stdev=2534.27, samples=2 00:14:40.074 iops : min= 3072, max= 3968, avg=3520.00, stdev=633.57, samples=2 00:14:40.074 lat (msec) : 4=0.49%, 10=7.29%, 20=67.69%, 50=21.53%, 100=2.99% 00:14:40.074 cpu : usr=4.08%, sys=5.37%, ctx=323, majf=0, minf=1 00:14:40.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:40.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.074 issued rwts: total=3136,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.074 job2: (groupid=0, jobs=1): err= 0: pid=866171: Wed May 15 00:30:06 2024 00:14:40.074 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:14:40.074 slat (usec): min=2, max=22662, avg=118.86, stdev=946.21 00:14:40.074 clat (usec): min=6686, max=83835, avg=16098.92, stdev=9928.79 00:14:40.074 lat (usec): min=6992, max=87162, avg=16217.78, stdev=10022.59 00:14:40.074 clat percentiles (usec): 00:14:40.074 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11207], 00:14:40.074 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[13566], 00:14:40.074 | 70.00th=[14222], 80.00th=[15139], 90.00th=[33424], 95.00th=[43779], 00:14:40.074 | 99.00th=[51643], 99.50th=[54264], 99.90th=[65799], 99.95th=[65799], 00:14:40.074 | 99.99th=[83362] 00:14:40.074 write: IOPS=4165, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1004msec); 0 zone resets 00:14:40.074 slat (usec): min=3, max=15431, avg=112.83, stdev=768.23 00:14:40.074 clat (usec): min=3909, max=39760, avg=14567.20, stdev=5385.82 00:14:40.074 lat (usec): min=3926, max=42326, avg=14680.03, stdev=5438.88 00:14:40.074 clat percentiles (usec): 00:14:40.074 | 1.00th=[ 6587], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11600], 00:14:40.074 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:14:40.074 | 70.00th=[14484], 80.00th=[17171], 90.00th=[23462], 95.00th=[24249], 00:14:40.074 | 99.00th=[30802], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:14:40.074 | 99.99th=[39584] 00:14:40.074 bw ( KiB/s): min=13048, max=19720, per=26.27%, avg=16384.00, stdev=4717.82, samples=2 00:14:40.074 iops : min= 3262, max= 4930, avg=4096.00, stdev=1179.45, samples=2 00:14:40.074 lat (msec) : 4=0.05%, 10=8.27%, 20=77.80%, 50=13.07%, 100=0.81% 00:14:40.074 cpu : usr=5.68%, sys=7.48%, ctx=280, majf=0, minf=1 00:14:40.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:40.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:14:40.074 issued rwts: total=4096,4182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.074 job3: (groupid=0, jobs=1): err= 0: pid=866172: Wed May 15 00:30:06 2024 00:14:40.074 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:14:40.074 slat (usec): min=3, max=11945, avg=113.75, stdev=801.28 00:14:40.074 clat (usec): min=5347, max=32950, avg=14551.65, stdev=3401.68 00:14:40.074 lat (usec): min=5361, max=32963, avg=14665.40, stdev=3450.48 00:14:40.074 clat percentiles (usec): 00:14:40.074 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[11207], 20.00th=[12387], 00:14:40.074 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[14222], 00:14:40.074 | 70.00th=[15533], 80.00th=[16909], 90.00th=[18744], 95.00th=[21627], 00:14:40.074 | 99.00th=[26608], 99.50th=[30278], 99.90th=[32900], 99.95th=[32900], 00:14:40.074 | 99.99th=[32900] 00:14:40.075 write: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1012msec); 0 zone resets 00:14:40.075 slat (usec): min=4, max=14193, avg=110.90, stdev=710.52 00:14:40.075 clat (usec): min=1345, max=98597, avg=15292.12, stdev=13865.83 00:14:40.075 lat (usec): min=1382, max=98630, avg=15403.02, stdev=13943.85 00:14:40.075 clat percentiles (usec): 00:14:40.075 | 1.00th=[ 3523], 5.00th=[ 6128], 10.00th=[ 7570], 20.00th=[ 8848], 00:14:40.075 | 30.00th=[10028], 40.00th=[11076], 50.00th=[12911], 60.00th=[13829], 00:14:40.075 | 70.00th=[14877], 80.00th=[16712], 90.00th=[22414], 95.00th=[25297], 00:14:40.075 | 99.00th=[94897], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042], 00:14:40.075 | 99.99th=[99091] 00:14:40.075 bw ( KiB/s): min=13944, max=20480, per=27.60%, avg=17212.00, stdev=4621.65, samples=2 00:14:40.075 iops : min= 3486, max= 5120, avg=4303.00, stdev=1155.41, samples=2 00:14:40.075 lat (msec) : 2=0.08%, 4=0.72%, 10=16.01%, 20=73.41%, 50=8.10% 00:14:40.075 lat (msec) : 100=1.68% 00:14:40.075 cpu : usr=4.95%, sys=8.41%, ctx=318, majf=0, minf=1 00:14:40.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:40.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.075 issued rwts: total=4096,4430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.075 00:14:40.075 Run status group 0 (all jobs): 00:14:40.075 READ: bw=57.3MiB/s (60.0MB/s), 12.2MiB/s-15.9MiB/s (12.8MB/s-16.7MB/s), io=57.9MiB (60.8MB), run=1004-1012msec 00:14:40.075 WRITE: bw=60.9MiB/s (63.9MB/s), 13.9MiB/s-17.1MiB/s (14.6MB/s-17.9MB/s), io=61.6MiB (64.6MB), run=1004-1012msec 00:14:40.075 00:14:40.075 Disk stats (read/write): 00:14:40.075 nvme0n1: ios=3147/3575, merge=0/0, ticks=37566/44533, in_queue=82099, util=89.70% 00:14:40.075 nvme0n2: ios=2610/2790, merge=0/0, ticks=21304/19482, in_queue=40786, util=91.68% 00:14:40.075 nvme0n3: ios=3304/3584, merge=0/0, ticks=25588/23459, in_queue=49047, util=95.62% 00:14:40.075 nvme0n4: ios=3359/3584, merge=0/0, ticks=42728/50870, in_queue=93598, util=93.91% 00:14:40.075 00:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:40.075 00:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=866308 00:14:40.075 00:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:40.075 00:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # 
sleep 3 00:14:40.075 [global] 00:14:40.075 thread=1 00:14:40.075 invalidate=1 00:14:40.075 rw=read 00:14:40.075 time_based=1 00:14:40.075 runtime=10 00:14:40.075 ioengine=libaio 00:14:40.075 direct=1 00:14:40.075 bs=4096 00:14:40.075 iodepth=1 00:14:40.075 norandommap=1 00:14:40.075 numjobs=1 00:14:40.075 00:14:40.075 [job0] 00:14:40.075 filename=/dev/nvme0n1 00:14:40.075 [job1] 00:14:40.075 filename=/dev/nvme0n2 00:14:40.075 [job2] 00:14:40.075 filename=/dev/nvme0n3 00:14:40.075 [job3] 00:14:40.075 filename=/dev/nvme0n4 00:14:40.075 Could not set queue depth (nvme0n1) 00:14:40.075 Could not set queue depth (nvme0n2) 00:14:40.075 Could not set queue depth (nvme0n3) 00:14:40.075 Could not set queue depth (nvme0n4) 00:14:40.333 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.333 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.333 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.333 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.333 fio-3.35 00:14:40.333 Starting 4 threads 00:14:43.615 00:30:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:43.615 00:30:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:43.615 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3923968, buflen=4096 00:14:43.615 fio: pid=866526, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:43.615 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=2326528, buflen=4096 00:14:43.615 fio: pid=866525, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:43.615 00:30:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:43.615 00:30:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:43.873 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=29294592, buflen=4096 00:14:43.873 fio: pid=866517, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:43.873 00:30:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:43.873 00:30:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:44.130 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7233536, buflen=4096 00:14:44.130 fio: pid=866518, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:44.130 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:44.130 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:44.130 00:14:44.130 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=866517: Wed May 15 00:30:10 2024 00:14:44.130 read: IOPS=2087, BW=8350KiB/s (8551kB/s)(27.9MiB/3426msec) 00:14:44.130 slat (usec): min=4, 
max=12024, avg=17.37, stdev=248.23 00:14:44.130 clat (usec): min=315, max=1525, avg=458.68, stdev=115.16 00:14:44.130 lat (usec): min=321, max=12969, avg=476.05, stdev=280.31 00:14:44.130 clat percentiles (usec): 00:14:44.130 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 375], 00:14:44.130 | 30.00th=[ 388], 40.00th=[ 404], 50.00th=[ 433], 60.00th=[ 457], 00:14:44.130 | 70.00th=[ 498], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 644], 00:14:44.130 | 99.00th=[ 971], 99.50th=[ 1012], 99.90th=[ 1172], 99.95th=[ 1319], 00:14:44.130 | 99.99th=[ 1532] 00:14:44.130 bw ( KiB/s): min= 7176, max= 9888, per=76.93%, avg=8637.33, stdev=1133.28, samples=6 00:14:44.130 iops : min= 1794, max= 2472, avg=2159.33, stdev=283.32, samples=6 00:14:44.130 lat (usec) : 500=70.22%, 750=26.65%, 1000=2.42% 00:14:44.130 lat (msec) : 2=0.70% 00:14:44.130 cpu : usr=1.37%, sys=3.45%, ctx=7157, majf=0, minf=1 00:14:44.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:44.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.130 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.130 issued rwts: total=7153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:44.130 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=866518: Wed May 15 00:30:10 2024 00:14:44.130 read: IOPS=474, BW=1898KiB/s (1944kB/s)(7064KiB/3721msec) 00:14:44.130 slat (usec): min=5, max=20825, avg=48.24, stdev=728.39 00:14:44.130 clat (usec): min=417, max=43000, avg=2055.97, stdev=7688.55 00:14:44.130 lat (usec): min=432, max=43020, avg=2104.22, stdev=7717.21 00:14:44.130 clat percentiles (usec): 00:14:44.130 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 465], 20.00th=[ 482], 00:14:44.130 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:14:44.130 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 709], 00:14:44.130 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[43254], 00:14:44.130 | 99.99th=[43254] 00:14:44.130 bw ( KiB/s): min= 96, max= 6251, per=16.40%, avg=1841.57, stdev=2973.12, samples=7 00:14:44.130 iops : min= 24, max= 1562, avg=460.29, stdev=743.09, samples=7 00:14:44.130 lat (usec) : 500=24.79%, 750=70.57%, 1000=0.57% 00:14:44.130 lat (msec) : 2=0.23%, 20=0.06%, 50=3.74% 00:14:44.130 cpu : usr=0.16%, sys=0.75%, ctx=1775, majf=0, minf=1 00:14:44.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:44.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.130 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.130 issued rwts: total=1767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:44.130 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=866525: Wed May 15 00:30:10 2024 00:14:44.130 read: IOPS=180, BW=722KiB/s (739kB/s)(2272KiB/3147msec) 00:14:44.131 slat (usec): min=5, max=15536, avg=62.61, stdev=852.64 00:14:44.131 clat (usec): min=375, max=44996, avg=5473.60, stdev=13222.20 00:14:44.131 lat (usec): min=382, max=45016, avg=5536.30, stdev=13233.07 00:14:44.131 clat percentiles (usec): 00:14:44.131 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 515], 00:14:44.131 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 611], 00:14:44.131 | 70.00th=[ 668], 80.00th=[ 734], 
90.00th=[41157], 95.00th=[41157], 00:14:44.131 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:14:44.131 | 99.99th=[44827] 00:14:44.131 bw ( KiB/s): min= 96, max= 1920, per=4.10%, avg=460.00, stdev=724.27, samples=6 00:14:44.131 iops : min= 24, max= 480, avg=115.00, stdev=181.07, samples=6 00:14:44.131 lat (usec) : 500=17.05%, 750=64.50%, 1000=4.22% 00:14:44.131 lat (msec) : 2=1.93%, 10=0.18%, 50=11.95% 00:14:44.131 cpu : usr=0.25%, sys=0.16%, ctx=571, majf=0, minf=1 00:14:44.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:44.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.131 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.131 issued rwts: total=569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:44.131 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=866526: Wed May 15 00:30:10 2024 00:14:44.131 read: IOPS=331, BW=1326KiB/s (1358kB/s)(3832KiB/2890msec) 00:14:44.131 slat (nsec): min=5602, max=42065, avg=9314.50, stdev=5763.01 00:14:44.131 clat (usec): min=346, max=41208, avg=3003.90, stdev=9904.47 00:14:44.131 lat (usec): min=352, max=41219, avg=3013.21, stdev=9906.95 00:14:44.131 clat percentiles (usec): 00:14:44.131 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 371], 00:14:44.131 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 429], 00:14:44.131 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[41157], 00:14:44.131 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:44.131 | 99.99th=[41157] 00:14:44.131 bw ( KiB/s): min= 96, max= 5616, per=13.50%, avg=1516.80, stdev=2374.68, samples=5 00:14:44.131 iops : min= 24, max= 1404, avg=379.20, stdev=593.67, samples=5 00:14:44.131 lat (usec) : 500=82.59%, 750=10.74%, 1000=0.21% 00:14:44.131 lat (msec) : 50=6.36% 00:14:44.131 cpu : usr=0.17%, sys=0.48%, ctx=959, majf=0, minf=1 00:14:44.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:44.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.131 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.131 issued rwts: total=959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:44.131 00:14:44.131 Run status group 0 (all jobs): 00:14:44.131 READ: bw=11.0MiB/s (11.5MB/s), 722KiB/s-8350KiB/s (739kB/s-8551kB/s), io=40.8MiB (42.8MB), run=2890-3721msec 00:14:44.131 00:14:44.131 Disk stats (read/write): 00:14:44.131 nvme0n1: ios=7038/0, merge=0/0, ticks=3120/0, in_queue=3120, util=94.77% 00:14:44.131 nvme0n2: ios=1803/0, merge=0/0, ticks=4182/0, in_queue=4182, util=97.70% 00:14:44.131 nvme0n3: ios=497/0, merge=0/0, ticks=3062/0, in_queue=3062, util=95.88% 00:14:44.131 nvme0n4: ios=957/0, merge=0/0, ticks=2828/0, in_queue=2828, util=96.74% 00:14:44.389 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:44.389 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:44.646 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:44.646 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:44.904 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:44.905 00:30:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:45.163 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.163 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 866308 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:45.421 nvmf hotplug test: fio failed as expected 00:14:45.421 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:45.679 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.680 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:45.680 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.680 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.680 rmmod nvme_tcp 00:14:45.938 rmmod nvme_fabrics 00:14:45.938 rmmod nvme_keyring 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 864286 ']' 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 864286 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 864286 ']' 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 864286 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 864286 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 864286' 00:14:45.938 killing process with pid 864286 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 864286 00:14:45.938 [2024-05-15 00:30:11.922228] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:45.938 00:30:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 864286 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.197 00:30:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.101 00:30:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:48.101 00:14:48.101 real 0m24.149s 00:14:48.101 user 1m20.350s 00:14:48.101 sys 0m7.513s 00:14:48.101 00:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:48.101 00:30:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.101 ************************************ 00:14:48.101 END TEST nvmf_fio_target 00:14:48.101 ************************************ 00:14:48.101 00:30:14 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:48.359 00:30:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:48.359 00:30:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:48.359 00:30:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.359 ************************************ 00:14:48.359 
START TEST nvmf_bdevio 00:14:48.359 ************************************ 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:48.359 * Looking for test storage... 00:14:48.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.359 00:30:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.360 00:30:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:50.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:50.927 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:50.927 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.927 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:50.928 
Found net devices under 0000:0a:00.1: cvl_0_1 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:14:50.928 00:14:50.928 --- 10.0.0.2 ping statistics --- 00:14:50.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.928 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:14:50.928 00:14:50.928 --- 10.0.0.1 ping statistics --- 00:14:50.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.928 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=869939 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 869939 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 869939 ']' 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:50.928 00:30:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.928 [2024-05-15 00:30:16.970907] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:14:50.928 [2024-05-15 00:30:16.971006] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.928 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.928 [2024-05-15 00:30:17.052184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.186 [2024-05-15 00:30:17.174141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.187 [2024-05-15 00:30:17.174205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
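The xtrace above is dense, so a condensed sketch of the TCP test topology that nvmf_tcp_init has just established follows (namespace, interface, and address names are taken from the log; the address flushes, loopback bring-up, and error handling are omitted):

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator interface stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Because the target side lives entirely inside cvl_0_0_ns_spdk, the nvmf_tgt above is launched through ip netns exec (NVMF_TARGET_NS_CMD).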
00:14:51.187 [2024-05-15 00:30:17.174232] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.187 [2024-05-15 00:30:17.174245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.187 [2024-05-15 00:30:17.174257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.187 [2024-05-15 00:30:17.174361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:51.187 [2024-05-15 00:30:17.174416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:51.187 [2024-05-15 00:30:17.174470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:51.187 [2024-05-15 00:30:17.174473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.752 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:51.752 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:14:51.752 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.752 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:51.752 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.010 [2024-05-15 00:30:17.931539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.010 Malloc0 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.010 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:14:52.011 [2024-05-15 00:30:17.984811] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:52.011 [2024-05-15 00:30:17.985136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:52.011 { 00:14:52.011 "params": { 00:14:52.011 "name": "Nvme$subsystem", 00:14:52.011 "trtype": "$TEST_TRANSPORT", 00:14:52.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:52.011 "adrfam": "ipv4", 00:14:52.011 "trsvcid": "$NVMF_PORT", 00:14:52.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:52.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:52.011 "hdgst": ${hdgst:-false}, 00:14:52.011 "ddgst": ${ddgst:-false} 00:14:52.011 }, 00:14:52.011 "method": "bdev_nvme_attach_controller" 00:14:52.011 } 00:14:52.011 EOF 00:14:52.011 )") 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:52.011 00:30:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:52.011 "params": { 00:14:52.011 "name": "Nvme1", 00:14:52.011 "trtype": "tcp", 00:14:52.011 "traddr": "10.0.0.2", 00:14:52.011 "adrfam": "ipv4", 00:14:52.011 "trsvcid": "4420", 00:14:52.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.011 "hdgst": false, 00:14:52.011 "ddgst": false 00:14:52.011 }, 00:14:52.011 "method": "bdev_nvme_attach_controller" 00:14:52.011 }' 00:14:52.011 [2024-05-15 00:30:18.030292] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
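For reference, the subsystem configuration that bdevio.sh just drove through the rpc_cmd wrapper corresponds to the following direct rpc.py invocations (RPC names and arguments are taken verbatim from the log; the target's RPC socket is the default /var/tmp/spdk.sock, the same address waitforlisten polled above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio application then consumes the gen_nvmf_target_json output on /dev/fd/62 to attach to that listener from the initiator side.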
00:14:52.011 [2024-05-15 00:30:18.030373] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870095 ] 00:14:52.011 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.011 [2024-05-15 00:30:18.102007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.268 [2024-05-15 00:30:18.218673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.268 [2024-05-15 00:30:18.218723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.268 [2024-05-15 00:30:18.218726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.525 I/O targets: 00:14:52.525 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:52.525 00:14:52.525 00:14:52.525 CUnit - A unit testing framework for C - Version 2.1-3 00:14:52.525 http://cunit.sourceforge.net/ 00:14:52.525 00:14:52.525 00:14:52.525 Suite: bdevio tests on: Nvme1n1 00:14:52.525 Test: blockdev write read block ...passed 00:14:52.525 Test: blockdev write zeroes read block ...passed 00:14:52.525 Test: blockdev write zeroes read no split ...passed 00:14:52.783 Test: blockdev write zeroes read split ...passed 00:14:52.783 Test: blockdev write zeroes read split partial ...passed 00:14:52.783 Test: blockdev reset ...[2024-05-15 00:30:18.777857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:52.783 [2024-05-15 00:30:18.777979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17589f0 (9): Bad file descriptor 00:14:52.783 [2024-05-15 00:30:18.928007] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:52.783 passed 00:14:52.783 Test: blockdev write read 8 blocks ...passed 00:14:52.783 Test: blockdev write read size > 128k ...passed 00:14:52.783 Test: blockdev write read invalid size ...passed 00:14:53.041 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.041 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.041 Test: blockdev write read max offset ...passed 00:14:53.041 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.041 Test: blockdev writev readv 8 blocks ...passed 00:14:53.041 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.041 Test: blockdev writev readv block ...passed 00:14:53.041 Test: blockdev writev readv size > 128k ...passed 00:14:53.041 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.041 Test: blockdev comparev and writev ...[2024-05-15 00:30:19.104809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.041 [2024-05-15 00:30:19.104846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:53.041 [2024-05-15 00:30:19.104876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.041 [2024-05-15 00:30:19.104893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:53.041 [2024-05-15 00:30:19.105332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.041 [2024-05-15 00:30:19.105356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:53.041 [2024-05-15 00:30:19.105378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.041 [2024-05-15 00:30:19.105394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:53.041 [2024-05-15 00:30:19.105810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.041 [2024-05-15 00:30:19.105835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:53.041 [2024-05-15 00:30:19.105856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.041 [2024-05-15 00:30:19.105872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:53.042 [2024-05-15 00:30:19.106285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.042 [2024-05-15 00:30:19.106309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:53.042 [2024-05-15 00:30:19.106330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.042 [2024-05-15 00:30:19.106346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:53.042 passed 00:14:53.042 Test: blockdev nvme passthru rw ...passed 00:14:53.042 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:30:19.189336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.042 [2024-05-15 00:30:19.189363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:53.042 [2024-05-15 00:30:19.189585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.042 [2024-05-15 00:30:19.189614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:53.042 [2024-05-15 00:30:19.189845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.042 [2024-05-15 00:30:19.189870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:53.042 [2024-05-15 00:30:19.190094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.042 [2024-05-15 00:30:19.190119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:53.042 passed 00:14:53.300 Test: blockdev nvme admin passthru ...passed 00:14:53.300 Test: blockdev copy ...passed 00:14:53.300 00:14:53.300 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.300 suites 1 1 n/a 0 0 00:14:53.300 tests 23 23 23 0 0 00:14:53.300 asserts 152 152 152 0 n/a 00:14:53.300 00:14:53.300 Elapsed time = 1.386 seconds 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.558 rmmod nvme_tcp 00:14:53.558 rmmod nvme_fabrics 00:14:53.558 rmmod nvme_keyring 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 869939 ']' 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 869939 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 
869939 ']' 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 869939 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 869939 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 869939' 00:14:53.558 killing process with pid 869939 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 869939 00:14:53.558 [2024-05-15 00:30:19.579148] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:53.558 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 869939 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.817 00:30:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.351 00:30:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.351 00:14:56.351 real 0m7.636s 00:14:56.351 user 0m14.507s 00:14:56.351 sys 0m2.428s 00:14:56.351 00:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:56.351 00:30:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:56.351 ************************************ 00:14:56.351 END TEST nvmf_bdevio 00:14:56.351 ************************************ 00:14:56.351 00:30:21 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:56.351 00:30:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:56.351 00:30:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:56.351 00:30:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.351 ************************************ 00:14:56.351 START TEST nvmf_auth_target 00:14:56.351 ************************************ 00:14:56.351 00:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:56.351 * Looking for test storage... 
00:14:56.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:14:56.351 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.352 00:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:58.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:58.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:58.886 Found net devices under 
0000:0a:00.0: cvl_0_0 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:58.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.886 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:58.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:14:58.887 00:14:58.887 --- 10.0.0.2 ping statistics --- 00:14:58.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.887 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:14:58.887 00:14:58.887 --- 10.0.0.1 ping statistics --- 00:14:58.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.887 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=872578 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 872578 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 872578 ']' 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:58.887 00:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=872729 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4a87b417c7b84d776733da002afeee7a51f6aa8bbcb08742 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.D7q 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4a87b417c7b84d776733da002afeee7a51f6aa8bbcb08742 0 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4a87b417c7b84d776733da002afeee7a51f6aa8bbcb08742 0 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4a87b417c7b84d776733da002afeee7a51f6aa8bbcb08742 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.D7q 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.D7q 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.D7q 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a7086542ed0c94d706c89cfb7401e7b 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.McA 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7a7086542ed0c94d706c89cfb7401e7b 1 00:14:59.821 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a7086542ed0c94d706c89cfb7401e7b 1 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7a7086542ed0c94d706c89cfb7401e7b 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.McA 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.McA 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.McA 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3d1f729e08706dbe6b6d66ed3c2340ffefebb734afa7048a 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iLh 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3d1f729e08706dbe6b6d66ed3c2340ffefebb734afa7048a 2 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3d1f729e08706dbe6b6d66ed3c2340ffefebb734afa7048a 2 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3d1f729e08706dbe6b6d66ed3c2340ffefebb734afa7048a 00:14:59.822 
00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iLh 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iLh 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.iLh 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40b0550a611e21674d85aa8aeec07398917a44785aaeac1d6a9b18ab6bf5ec8b 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.E99 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40b0550a611e21674d85aa8aeec07398917a44785aaeac1d6a9b18ab6bf5ec8b 3 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40b0550a611e21674d85aa8aeec07398917a44785aaeac1d6a9b18ab6bf5ec8b 3 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40b0550a611e21674d85aa8aeec07398917a44785aaeac1d6a9b18ab6bf5ec8b 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.E99 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.E99 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.E99 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 872578 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 872578 ']' 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
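gen_dhchap_key above pulls key material from /dev/urandom with xxd and hands it to an inline python snippet for formatting. Assuming the standard DH-HMAC-CHAP secret representation (the ASCII key material followed by a little-endian CRC-32, Base64-encoded, prefixed with a hash identifier: 00 = null, 01 = sha256, 02 = sha384, 03 = sha512), an illustrative stand-in for that formatting step, not the verbatim helper from nvmf/common.sh, looks like this:

key=4a87b417c7b84d776733da002afeee7a51f6aa8bbcb08742   # hex material printed for key0 above
digest=0                                               # gen_dhchap_key null 48 -> hash id 00
python - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")            # 4-byte integrity check appended to the secret
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

The Base64 payload of the DHHC-1:00:... secret passed to nvme connect at the end of this section decodes back to exactly this ASCII hex string, which is how the host and the target end up agreeing on the same key.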
00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:59.822 00:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 872729 /var/tmp/host.sock 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 872729 ']' 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:00.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:00.080 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.D7q 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.D7q 00:15:00.338 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.D7q 00:15:00.596 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:00.596 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.McA 00:15:00.596 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.596 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.596 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.596 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.McA 00:15:00.596 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.McA 00:15:00.853 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:00.853 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iLh 00:15:00.853 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:00.853 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.853 00:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:00.853 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iLh 00:15:00.853 00:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iLh 00:15:01.109 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:01.109 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.E99 00:15:01.109 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:01.109 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.109 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:01.109 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.E99 00:15:01.109 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.E99 00:15:01.365 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:15:01.365 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.365 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:01.365 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.365 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:01.623 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:01.881 00:15:01.881 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:01.881 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:01.881 00:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:02.139 { 00:15:02.139 "cntlid": 1, 00:15:02.139 "qid": 0, 00:15:02.139 "state": "enabled", 00:15:02.139 "listen_address": { 00:15:02.139 "trtype": "TCP", 00:15:02.139 "adrfam": "IPv4", 00:15:02.139 "traddr": "10.0.0.2", 00:15:02.139 "trsvcid": "4420" 00:15:02.139 }, 00:15:02.139 "peer_address": { 00:15:02.139 "trtype": "TCP", 00:15:02.139 "adrfam": "IPv4", 00:15:02.139 "traddr": "10.0.0.1", 00:15:02.139 "trsvcid": "59606" 00:15:02.139 }, 00:15:02.139 "auth": { 00:15:02.139 "state": "completed", 00:15:02.139 "digest": "sha256", 00:15:02.139 "dhgroup": "null" 00:15:02.139 } 00:15:02.139 } 00:15:02.139 ]' 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:02.139 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:02.396 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.396 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.396 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.652 00:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:03.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.585 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:03.586 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.586 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.586 00:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.586 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:03.586 00:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:04.152 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.152 00:30:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:04.152 { 00:15:04.152 "cntlid": 3, 00:15:04.152 "qid": 0, 00:15:04.152 "state": "enabled", 00:15:04.152 "listen_address": { 00:15:04.152 "trtype": "TCP", 00:15:04.152 "adrfam": "IPv4", 00:15:04.152 "traddr": "10.0.0.2", 00:15:04.152 "trsvcid": "4420" 00:15:04.152 }, 00:15:04.152 "peer_address": { 00:15:04.152 "trtype": "TCP", 00:15:04.152 "adrfam": "IPv4", 00:15:04.152 "traddr": "10.0.0.1", 00:15:04.152 "trsvcid": "59622" 00:15:04.152 }, 00:15:04.152 "auth": { 00:15:04.152 "state": "completed", 00:15:04.152 "digest": "sha256", 00:15:04.152 "dhgroup": "null" 00:15:04.152 } 00:15:04.152 } 00:15:04.152 ]' 00:15:04.152 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:04.409 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.409 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:04.409 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:04.409 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:04.409 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.409 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.409 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.666 00:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.628 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:05.884 00:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:06.140 00:15:06.140 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:06.140 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:06.140 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:06.396 { 00:15:06.396 "cntlid": 5, 00:15:06.396 "qid": 0, 00:15:06.396 "state": "enabled", 00:15:06.396 "listen_address": { 00:15:06.396 "trtype": "TCP", 00:15:06.396 "adrfam": "IPv4", 00:15:06.396 "traddr": "10.0.0.2", 00:15:06.396 "trsvcid": "4420" 00:15:06.396 }, 00:15:06.396 "peer_address": { 00:15:06.396 "trtype": "TCP", 00:15:06.396 "adrfam": "IPv4", 00:15:06.396 "traddr": "10.0.0.1", 00:15:06.396 "trsvcid": "55248" 00:15:06.396 }, 00:15:06.396 "auth": { 00:15:06.396 "state": "completed", 00:15:06.396 "digest": "sha256", 00:15:06.396 "dhgroup": "null" 00:15:06.396 } 00:15:06.396 } 00:15:06.396 ]' 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.396 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:06.653 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:06.653 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:06.653 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.653 00:30:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.653 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.910 00:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.846 00:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.103 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.360 00:15:08.360 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:08.360 00:30:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:08.360 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:08.618 { 00:15:08.618 "cntlid": 7, 00:15:08.618 "qid": 0, 00:15:08.618 "state": "enabled", 00:15:08.618 "listen_address": { 00:15:08.618 "trtype": "TCP", 00:15:08.618 "adrfam": "IPv4", 00:15:08.618 "traddr": "10.0.0.2", 00:15:08.618 "trsvcid": "4420" 00:15:08.618 }, 00:15:08.618 "peer_address": { 00:15:08.618 "trtype": "TCP", 00:15:08.618 "adrfam": "IPv4", 00:15:08.618 "traddr": "10.0.0.1", 00:15:08.618 "trsvcid": "55284" 00:15:08.618 }, 00:15:08.618 "auth": { 00:15:08.618 "state": "completed", 00:15:08.618 "digest": "sha256", 00:15:08.618 "dhgroup": "null" 00:15:08.618 } 00:15:08.618 } 00:15:08.618 ]' 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.618 00:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.876 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.809 00:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:10.066 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:10.631 00:15:10.631 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:10.631 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:10.631 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:10.889 { 00:15:10.889 "cntlid": 9, 00:15:10.889 "qid": 0, 00:15:10.889 "state": "enabled", 00:15:10.889 "listen_address": { 00:15:10.889 "trtype": "TCP", 00:15:10.889 "adrfam": "IPv4", 00:15:10.889 "traddr": "10.0.0.2", 00:15:10.889 "trsvcid": "4420" 00:15:10.889 }, 00:15:10.889 "peer_address": { 00:15:10.889 "trtype": "TCP", 00:15:10.889 "adrfam": "IPv4", 00:15:10.889 "traddr": "10.0.0.1", 
00:15:10.889 "trsvcid": "55300" 00:15:10.889 }, 00:15:10.889 "auth": { 00:15:10.889 "state": "completed", 00:15:10.889 "digest": "sha256", 00:15:10.889 "dhgroup": "ffdhe2048" 00:15:10.889 } 00:15:10.889 } 00:15:10.889 ]' 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.889 00:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.147 00:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.081 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:12.339 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:12.597 00:15:12.597 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:12.597 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.597 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:12.855 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.855 00:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.855 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:12.855 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.855 00:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:12.855 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:12.855 { 00:15:12.855 "cntlid": 11, 00:15:12.855 "qid": 0, 00:15:12.855 "state": "enabled", 00:15:12.855 "listen_address": { 00:15:12.855 "trtype": "TCP", 00:15:12.855 "adrfam": "IPv4", 00:15:12.855 "traddr": "10.0.0.2", 00:15:12.855 "trsvcid": "4420" 00:15:12.855 }, 00:15:12.855 "peer_address": { 00:15:12.855 "trtype": "TCP", 00:15:12.855 "adrfam": "IPv4", 00:15:12.855 "traddr": "10.0.0.1", 00:15:12.855 "trsvcid": "55328" 00:15:12.855 }, 00:15:12.855 "auth": { 00:15:12.855 "state": "completed", 00:15:12.855 "digest": "sha256", 00:15:12.855 "dhgroup": "ffdhe2048" 00:15:12.855 } 00:15:12.855 } 00:15:12.855 ]' 00:15:12.855 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:13.113 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.113 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:13.113 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.113 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:13.113 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.113 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.113 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.371 00:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.304 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:14.562 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:14.820 00:15:14.820 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:14.820 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:14.820 00:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:15.078 { 00:15:15.078 "cntlid": 13, 00:15:15.078 "qid": 0, 00:15:15.078 "state": "enabled", 00:15:15.078 "listen_address": { 00:15:15.078 "trtype": "TCP", 00:15:15.078 "adrfam": "IPv4", 00:15:15.078 "traddr": "10.0.0.2", 00:15:15.078 "trsvcid": "4420" 00:15:15.078 }, 00:15:15.078 "peer_address": { 00:15:15.078 "trtype": "TCP", 00:15:15.078 "adrfam": "IPv4", 00:15:15.078 "traddr": "10.0.0.1", 00:15:15.078 "trsvcid": "50396" 00:15:15.078 }, 00:15:15.078 "auth": { 00:15:15.078 "state": "completed", 00:15:15.078 "digest": "sha256", 00:15:15.078 "dhgroup": "ffdhe2048" 00:15:15.078 } 00:15:15.078 } 00:15:15.078 ]' 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.078 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:15.336 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.336 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.336 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.593 00:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:16.527 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.785 00:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.044 00:15:17.044 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:17.044 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:17.044 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:17.302 { 00:15:17.302 "cntlid": 15, 00:15:17.302 "qid": 0, 00:15:17.302 "state": "enabled", 00:15:17.302 "listen_address": { 00:15:17.302 "trtype": "TCP", 00:15:17.302 "adrfam": "IPv4", 00:15:17.302 "traddr": "10.0.0.2", 00:15:17.302 "trsvcid": "4420" 00:15:17.302 }, 00:15:17.302 "peer_address": { 00:15:17.302 "trtype": "TCP", 00:15:17.302 "adrfam": "IPv4", 00:15:17.302 "traddr": "10.0.0.1", 00:15:17.302 "trsvcid": "50426" 00:15:17.302 }, 00:15:17.302 "auth": { 00:15:17.302 "state": "completed", 00:15:17.302 "digest": "sha256", 00:15:17.302 "dhgroup": "ffdhe2048" 00:15:17.302 } 00:15:17.302 } 00:15:17.302 ]' 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.302 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:17.559 00:30:43 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:17.559 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:17.559 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.559 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.559 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.817 00:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.789 00:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.047 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:19.048 00:30:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:19.305 00:15:19.305 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:19.305 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:19.305 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.563 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.564 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.564 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.564 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.564 00:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.564 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:19.564 { 00:15:19.564 "cntlid": 17, 00:15:19.564 "qid": 0, 00:15:19.564 "state": "enabled", 00:15:19.564 "listen_address": { 00:15:19.564 "trtype": "TCP", 00:15:19.564 "adrfam": "IPv4", 00:15:19.564 "traddr": "10.0.0.2", 00:15:19.564 "trsvcid": "4420" 00:15:19.564 }, 00:15:19.564 "peer_address": { 00:15:19.564 "trtype": "TCP", 00:15:19.564 "adrfam": "IPv4", 00:15:19.564 "traddr": "10.0.0.1", 00:15:19.564 "trsvcid": "50450" 00:15:19.564 }, 00:15:19.564 "auth": { 00:15:19.564 "state": "completed", 00:15:19.564 "digest": "sha256", 00:15:19.564 "dhgroup": "ffdhe3072" 00:15:19.564 } 00:15:19.564 } 00:15:19.564 ]' 00:15:19.564 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:19.564 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.821 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:19.821 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:19.821 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:19.822 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.822 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.822 00:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.079 00:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.013 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:21.271 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:21.837 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:21.837 { 
00:15:21.837 "cntlid": 19, 00:15:21.837 "qid": 0, 00:15:21.837 "state": "enabled", 00:15:21.837 "listen_address": { 00:15:21.837 "trtype": "TCP", 00:15:21.837 "adrfam": "IPv4", 00:15:21.837 "traddr": "10.0.0.2", 00:15:21.837 "trsvcid": "4420" 00:15:21.837 }, 00:15:21.837 "peer_address": { 00:15:21.837 "trtype": "TCP", 00:15:21.837 "adrfam": "IPv4", 00:15:21.837 "traddr": "10.0.0.1", 00:15:21.837 "trsvcid": "50466" 00:15:21.837 }, 00:15:21.837 "auth": { 00:15:21.837 "state": "completed", 00:15:21.837 "digest": "sha256", 00:15:21.837 "dhgroup": "ffdhe3072" 00:15:21.837 } 00:15:21.837 } 00:15:21.837 ]' 00:15:21.837 00:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:22.095 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.095 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:22.095 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.095 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:22.095 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.095 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.095 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.353 00:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:15:23.286 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.286 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.286 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.286 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.286 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.286 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:23.286 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.287 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:23.544 
00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:23.544 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:23.802 00:15:24.061 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:24.061 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:24.061 00:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.061 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.061 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.061 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.061 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.061 00:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.319 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:24.319 { 00:15:24.319 "cntlid": 21, 00:15:24.319 "qid": 0, 00:15:24.319 "state": "enabled", 00:15:24.319 "listen_address": { 00:15:24.319 "trtype": "TCP", 00:15:24.319 "adrfam": "IPv4", 00:15:24.319 "traddr": "10.0.0.2", 00:15:24.319 "trsvcid": "4420" 00:15:24.319 }, 00:15:24.319 "peer_address": { 00:15:24.319 "trtype": "TCP", 00:15:24.319 "adrfam": "IPv4", 00:15:24.319 "traddr": "10.0.0.1", 00:15:24.319 "trsvcid": "50492" 00:15:24.320 }, 00:15:24.320 "auth": { 00:15:24.320 "state": "completed", 00:15:24.320 "digest": "sha256", 00:15:24.320 "dhgroup": "ffdhe3072" 00:15:24.320 } 00:15:24.320 } 00:15:24.320 ]' 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.320 00:30:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.577 00:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.511 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.769 00:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.350 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:26.350 00:30:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:26.350 { 00:15:26.350 "cntlid": 23, 00:15:26.350 "qid": 0, 00:15:26.350 "state": "enabled", 00:15:26.350 "listen_address": { 00:15:26.350 "trtype": "TCP", 00:15:26.350 "adrfam": "IPv4", 00:15:26.350 "traddr": "10.0.0.2", 00:15:26.350 "trsvcid": "4420" 00:15:26.350 }, 00:15:26.350 "peer_address": { 00:15:26.350 "trtype": "TCP", 00:15:26.350 "adrfam": "IPv4", 00:15:26.350 "traddr": "10.0.0.1", 00:15:26.350 "trsvcid": "40528" 00:15:26.350 }, 00:15:26.350 "auth": { 00:15:26.350 "state": "completed", 00:15:26.350 "digest": "sha256", 00:15:26.350 "dhgroup": "ffdhe3072" 00:15:26.350 } 00:15:26.350 } 00:15:26.350 ]' 00:15:26.350 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:26.607 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.607 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:26.607 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.607 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:26.607 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.607 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.607 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.864 00:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.797 00:30:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.797 00:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:28.055 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:28.313 00:15:28.313 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:28.313 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:28.313 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.571 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.571 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.571 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.571 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.571 00:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.571 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:28.571 { 00:15:28.571 "cntlid": 25, 00:15:28.571 "qid": 0, 00:15:28.571 "state": "enabled", 00:15:28.571 "listen_address": { 00:15:28.571 "trtype": "TCP", 00:15:28.571 "adrfam": "IPv4", 00:15:28.571 "traddr": "10.0.0.2", 00:15:28.571 "trsvcid": "4420" 00:15:28.571 }, 00:15:28.571 "peer_address": { 00:15:28.571 "trtype": "TCP", 00:15:28.571 "adrfam": "IPv4", 00:15:28.571 "traddr": "10.0.0.1", 00:15:28.571 "trsvcid": "40552" 00:15:28.571 }, 
00:15:28.571 "auth": { 00:15:28.571 "state": "completed", 00:15:28.571 "digest": "sha256", 00:15:28.571 "dhgroup": "ffdhe4096" 00:15:28.571 } 00:15:28.571 } 00:15:28.571 ]' 00:15:28.571 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:28.829 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.829 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:28.829 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.829 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:28.829 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.830 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.830 00:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.087 00:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.022 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:30.280 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:30.845 00:15:30.845 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:30.845 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:30.845 00:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:31.103 { 00:15:31.103 "cntlid": 27, 00:15:31.103 "qid": 0, 00:15:31.103 "state": "enabled", 00:15:31.103 "listen_address": { 00:15:31.103 "trtype": "TCP", 00:15:31.103 "adrfam": "IPv4", 00:15:31.103 "traddr": "10.0.0.2", 00:15:31.103 "trsvcid": "4420" 00:15:31.103 }, 00:15:31.103 "peer_address": { 00:15:31.103 "trtype": "TCP", 00:15:31.103 "adrfam": "IPv4", 00:15:31.103 "traddr": "10.0.0.1", 00:15:31.103 "trsvcid": "40578" 00:15:31.103 }, 00:15:31.103 "auth": { 00:15:31.103 "state": "completed", 00:15:31.103 "digest": "sha256", 00:15:31.103 "dhgroup": "ffdhe4096" 00:15:31.103 } 00:15:31.103 } 00:15:31.103 ]' 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.103 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.361 00:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:32.312 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:32.615 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:32.873 00:15:32.873 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:32.873 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:32.873 00:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.130 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.130 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.130 00:30:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.130 00:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.130 00:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.130 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:33.131 { 00:15:33.131 "cntlid": 29, 00:15:33.131 "qid": 0, 00:15:33.131 "state": "enabled", 00:15:33.131 "listen_address": { 00:15:33.131 "trtype": "TCP", 00:15:33.131 "adrfam": "IPv4", 00:15:33.131 "traddr": "10.0.0.2", 00:15:33.131 "trsvcid": "4420" 00:15:33.131 }, 00:15:33.131 "peer_address": { 00:15:33.131 "trtype": "TCP", 00:15:33.131 "adrfam": "IPv4", 00:15:33.131 "traddr": "10.0.0.1", 00:15:33.131 "trsvcid": "40608" 00:15:33.131 }, 00:15:33.131 "auth": { 00:15:33.131 "state": "completed", 00:15:33.131 "digest": "sha256", 00:15:33.131 "dhgroup": "ffdhe4096" 00:15:33.131 } 00:15:33.131 } 00:15:33.131 ]' 00:15:33.131 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:33.131 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.131 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:33.389 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.389 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:33.389 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.389 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.389 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.647 00:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.585 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.843 00:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:35.409 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:35.409 { 00:15:35.409 "cntlid": 31, 00:15:35.409 "qid": 0, 00:15:35.409 "state": "enabled", 00:15:35.409 "listen_address": { 00:15:35.409 "trtype": "TCP", 00:15:35.409 "adrfam": "IPv4", 00:15:35.409 "traddr": "10.0.0.2", 00:15:35.409 "trsvcid": "4420" 00:15:35.409 }, 00:15:35.409 "peer_address": { 00:15:35.409 "trtype": "TCP", 00:15:35.409 "adrfam": "IPv4", 00:15:35.409 "traddr": "10.0.0.1", 00:15:35.409 "trsvcid": "34186" 00:15:35.409 }, 00:15:35.409 "auth": { 00:15:35.409 "state": "completed", 00:15:35.409 "digest": "sha256", 00:15:35.409 "dhgroup": "ffdhe4096" 00:15:35.409 } 00:15:35.409 } 00:15:35.409 ]' 00:15:35.409 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:35.667 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.667 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:35.667 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:35.667 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:35.667 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.667 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.667 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.924 00:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.860 00:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:37.119 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:37.684 00:15:37.684 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:37.684 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:37.684 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:37.942 { 00:15:37.942 "cntlid": 33, 00:15:37.942 "qid": 0, 00:15:37.942 "state": "enabled", 00:15:37.942 "listen_address": { 00:15:37.942 "trtype": "TCP", 00:15:37.942 "adrfam": "IPv4", 00:15:37.942 "traddr": "10.0.0.2", 00:15:37.942 "trsvcid": "4420" 00:15:37.942 }, 00:15:37.942 "peer_address": { 00:15:37.942 "trtype": "TCP", 00:15:37.942 "adrfam": "IPv4", 00:15:37.942 "traddr": "10.0.0.1", 00:15:37.942 "trsvcid": "34204" 00:15:37.942 }, 00:15:37.942 "auth": { 00:15:37.942 "state": "completed", 00:15:37.942 "digest": "sha256", 00:15:37.942 "dhgroup": "ffdhe6144" 00:15:37.942 } 00:15:37.942 } 00:15:37.942 ]' 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.942 00:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:37.942 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.942 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:37.942 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.942 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.942 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.201 00:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:39.586 00:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:40.154 00:15:40.154 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:40.154 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:40.154 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:40.422 { 00:15:40.422 "cntlid": 35, 00:15:40.422 "qid": 0, 
00:15:40.422 "state": "enabled", 00:15:40.422 "listen_address": { 00:15:40.422 "trtype": "TCP", 00:15:40.422 "adrfam": "IPv4", 00:15:40.422 "traddr": "10.0.0.2", 00:15:40.422 "trsvcid": "4420" 00:15:40.422 }, 00:15:40.422 "peer_address": { 00:15:40.422 "trtype": "TCP", 00:15:40.422 "adrfam": "IPv4", 00:15:40.422 "traddr": "10.0.0.1", 00:15:40.422 "trsvcid": "34228" 00:15:40.422 }, 00:15:40.422 "auth": { 00:15:40.422 "state": "completed", 00:15:40.422 "digest": "sha256", 00:15:40.422 "dhgroup": "ffdhe6144" 00:15:40.422 } 00:15:40.422 } 00:15:40.422 ]' 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.422 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:40.686 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.686 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.686 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.686 00:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.056 00:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:42.056 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:42.621 00:15:42.621 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:42.621 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:42.621 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:42.878 { 00:15:42.878 "cntlid": 37, 00:15:42.878 "qid": 0, 00:15:42.878 "state": "enabled", 00:15:42.878 "listen_address": { 00:15:42.878 "trtype": "TCP", 00:15:42.878 "adrfam": "IPv4", 00:15:42.878 "traddr": "10.0.0.2", 00:15:42.878 "trsvcid": "4420" 00:15:42.878 }, 00:15:42.878 "peer_address": { 00:15:42.878 "trtype": "TCP", 00:15:42.878 "adrfam": "IPv4", 00:15:42.878 "traddr": "10.0.0.1", 00:15:42.878 "trsvcid": "34258" 00:15:42.878 }, 00:15:42.878 "auth": { 00:15:42.878 "state": "completed", 00:15:42.878 "digest": "sha256", 00:15:42.878 "dhgroup": "ffdhe6144" 00:15:42.878 } 00:15:42.878 } 00:15:42.878 ]' 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:42.878 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.879 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:42.879 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.879 00:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:42.879 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.879 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.879 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.136 00:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:15:44.068 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.068 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.068 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:44.068 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:44.324 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:44.324 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.324 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.581 00:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.145 00:15:45.145 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:45.145 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:45.145 00:31:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.401 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.401 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.401 00:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.401 00:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.401 00:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.401 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:45.401 { 00:15:45.401 "cntlid": 39, 00:15:45.401 "qid": 0, 00:15:45.401 "state": "enabled", 00:15:45.401 "listen_address": { 00:15:45.401 "trtype": "TCP", 00:15:45.401 "adrfam": "IPv4", 00:15:45.401 "traddr": "10.0.0.2", 00:15:45.401 "trsvcid": "4420" 00:15:45.401 }, 00:15:45.401 "peer_address": { 00:15:45.402 "trtype": "TCP", 00:15:45.402 "adrfam": "IPv4", 00:15:45.402 "traddr": "10.0.0.1", 00:15:45.402 "trsvcid": "59940" 00:15:45.402 }, 00:15:45.402 "auth": { 00:15:45.402 "state": "completed", 00:15:45.402 "digest": "sha256", 00:15:45.402 "dhgroup": "ffdhe6144" 00:15:45.402 } 00:15:45.402 } 00:15:45.402 ]' 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.402 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.659 00:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:15:46.611 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.611 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.611 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:46.611 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.612 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:46.612 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.612 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:15:46.612 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.612 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:46.875 00:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:47.806 00:15:47.806 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:47.806 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:47.806 00:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:48.064 { 00:15:48.064 "cntlid": 41, 00:15:48.064 "qid": 0, 00:15:48.064 "state": "enabled", 00:15:48.064 "listen_address": { 00:15:48.064 "trtype": "TCP", 00:15:48.064 "adrfam": "IPv4", 00:15:48.064 "traddr": "10.0.0.2", 00:15:48.064 "trsvcid": "4420" 00:15:48.064 }, 00:15:48.064 "peer_address": { 00:15:48.064 "trtype": "TCP", 00:15:48.064 "adrfam": "IPv4", 00:15:48.064 "traddr": "10.0.0.1", 00:15:48.064 "trsvcid": "59984" 00:15:48.064 }, 00:15:48.064 "auth": { 00:15:48.064 "state": 
"completed", 00:15:48.064 "digest": "sha256", 00:15:48.064 "dhgroup": "ffdhe8192" 00:15:48.064 } 00:15:48.064 } 00:15:48.064 ]' 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.064 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:48.322 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.322 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.322 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.322 00:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:15:49.255 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.255 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.255 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.255 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.255 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.255 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:49.255 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.256 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:49.822 00:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:50.754 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.754 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:50.754 { 00:15:50.754 "cntlid": 43, 00:15:50.754 "qid": 0, 00:15:50.754 "state": "enabled", 00:15:50.754 "listen_address": { 00:15:50.754 "trtype": "TCP", 00:15:50.754 "adrfam": "IPv4", 00:15:50.754 "traddr": "10.0.0.2", 00:15:50.754 "trsvcid": "4420" 00:15:50.754 }, 00:15:50.754 "peer_address": { 00:15:50.754 "trtype": "TCP", 00:15:50.754 "adrfam": "IPv4", 00:15:50.754 "traddr": "10.0.0.1", 00:15:50.754 "trsvcid": "59998" 00:15:50.754 }, 00:15:50.754 "auth": { 00:15:50.754 "state": "completed", 00:15:50.755 "digest": "sha256", 00:15:50.755 "dhgroup": "ffdhe8192" 00:15:50.755 } 00:15:50.755 } 00:15:50.755 ]' 00:15:50.755 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:50.755 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.755 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:50.755 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.755 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:51.012 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.012 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.012 00:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.270 00:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.203 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:52.461 00:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:53.394 00:15:53.394 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:53.394 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:53.394 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:53.651 { 00:15:53.651 "cntlid": 45, 00:15:53.651 "qid": 0, 00:15:53.651 "state": "enabled", 00:15:53.651 "listen_address": { 00:15:53.651 "trtype": "TCP", 00:15:53.651 "adrfam": "IPv4", 00:15:53.651 "traddr": "10.0.0.2", 00:15:53.651 "trsvcid": "4420" 00:15:53.651 }, 00:15:53.651 "peer_address": { 00:15:53.651 "trtype": "TCP", 00:15:53.651 "adrfam": "IPv4", 00:15:53.651 "traddr": "10.0.0.1", 00:15:53.651 "trsvcid": "60034" 00:15:53.651 }, 00:15:53.651 "auth": { 00:15:53.651 "state": "completed", 00:15:53.651 "digest": "sha256", 00:15:53.651 "dhgroup": "ffdhe8192" 00:15:53.651 } 00:15:53.651 } 00:15:53.651 ]' 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.651 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.909 00:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.843 00:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:15:55.102 
00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.102 00:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.035 00:15:56.035 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:56.035 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.035 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:56.293 { 00:15:56.293 "cntlid": 47, 00:15:56.293 "qid": 0, 00:15:56.293 "state": "enabled", 00:15:56.293 "listen_address": { 00:15:56.293 "trtype": "TCP", 00:15:56.293 "adrfam": "IPv4", 00:15:56.293 "traddr": "10.0.0.2", 00:15:56.293 "trsvcid": "4420" 00:15:56.293 }, 00:15:56.293 "peer_address": { 00:15:56.293 "trtype": "TCP", 00:15:56.293 "adrfam": "IPv4", 00:15:56.293 "traddr": "10.0.0.1", 00:15:56.293 "trsvcid": "38816" 00:15:56.293 }, 00:15:56.293 "auth": { 00:15:56.293 "state": "completed", 00:15:56.293 "digest": "sha256", 00:15:56.293 "dhgroup": "ffdhe8192" 00:15:56.293 } 00:15:56.293 } 00:15:56.293 ]' 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.293 
00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.293 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.551 00:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:15:57.486 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.486 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.486 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.486 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.744 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.002 00:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.002 00:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:58.002 00:31:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:58.260 00:15:58.260 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:58.260 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:58.260 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:58.518 { 00:15:58.518 "cntlid": 49, 00:15:58.518 "qid": 0, 00:15:58.518 "state": "enabled", 00:15:58.518 "listen_address": { 00:15:58.518 "trtype": "TCP", 00:15:58.518 "adrfam": "IPv4", 00:15:58.518 "traddr": "10.0.0.2", 00:15:58.518 "trsvcid": "4420" 00:15:58.518 }, 00:15:58.518 "peer_address": { 00:15:58.518 "trtype": "TCP", 00:15:58.518 "adrfam": "IPv4", 00:15:58.518 "traddr": "10.0.0.1", 00:15:58.518 "trsvcid": "38846" 00:15:58.518 }, 00:15:58.518 "auth": { 00:15:58.518 "state": "completed", 00:15:58.518 "digest": "sha384", 00:15:58.518 "dhgroup": "null" 00:15:58.518 } 00:15:58.518 } 00:15:58.518 ]' 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.518 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.776 00:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.709 00:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.966 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:15:59.966 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:59.966 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.966 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:59.966 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.966 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:59.966 00:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.967 00:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.967 00:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.967 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:59.967 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:00.224 00:16:00.224 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:00.224 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.224 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:00.481 { 00:16:00.481 "cntlid": 51, 00:16:00.481 "qid": 
0, 00:16:00.481 "state": "enabled", 00:16:00.481 "listen_address": { 00:16:00.481 "trtype": "TCP", 00:16:00.481 "adrfam": "IPv4", 00:16:00.481 "traddr": "10.0.0.2", 00:16:00.481 "trsvcid": "4420" 00:16:00.481 }, 00:16:00.481 "peer_address": { 00:16:00.481 "trtype": "TCP", 00:16:00.481 "adrfam": "IPv4", 00:16:00.481 "traddr": "10.0.0.1", 00:16:00.481 "trsvcid": "38880" 00:16:00.481 }, 00:16:00.481 "auth": { 00:16:00.481 "state": "completed", 00:16:00.481 "digest": "sha384", 00:16:00.481 "dhgroup": "null" 00:16:00.481 } 00:16:00.481 } 00:16:00.481 ]' 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.481 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:00.766 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:00.766 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:00.766 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.766 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.766 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.024 00:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.957 00:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:01.957 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:02.524 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:02.524 { 00:16:02.524 "cntlid": 53, 00:16:02.524 "qid": 0, 00:16:02.524 "state": "enabled", 00:16:02.524 "listen_address": { 00:16:02.524 "trtype": "TCP", 00:16:02.524 "adrfam": "IPv4", 00:16:02.524 "traddr": "10.0.0.2", 00:16:02.524 "trsvcid": "4420" 00:16:02.524 }, 00:16:02.524 "peer_address": { 00:16:02.524 "trtype": "TCP", 00:16:02.524 "adrfam": "IPv4", 00:16:02.524 "traddr": "10.0.0.1", 00:16:02.524 "trsvcid": "38920" 00:16:02.524 }, 00:16:02.524 "auth": { 00:16:02.524 "state": "completed", 00:16:02.524 "digest": "sha384", 00:16:02.524 "dhgroup": "null" 00:16:02.524 } 00:16:02.524 } 00:16:02.524 ]' 00:16:02.524 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:02.783 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.783 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:02.783 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:02.783 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:02.783 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.783 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.783 00:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.041 00:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.974 00:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.232 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.490 00:16:04.490 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:04.490 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:04.490 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:04.748 { 00:16:04.748 "cntlid": 55, 00:16:04.748 "qid": 0, 00:16:04.748 "state": "enabled", 00:16:04.748 "listen_address": { 00:16:04.748 "trtype": "TCP", 00:16:04.748 "adrfam": "IPv4", 00:16:04.748 "traddr": "10.0.0.2", 00:16:04.748 "trsvcid": "4420" 00:16:04.748 }, 00:16:04.748 "peer_address": { 00:16:04.748 "trtype": "TCP", 00:16:04.748 "adrfam": "IPv4", 00:16:04.748 "traddr": "10.0.0.1", 00:16:04.748 "trsvcid": "60268" 00:16:04.748 }, 00:16:04.748 "auth": { 00:16:04.748 "state": "completed", 00:16:04.748 "digest": "sha384", 00:16:04.748 "dhgroup": "null" 00:16:04.748 } 00:16:04.748 } 00:16:04.748 ]' 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.748 00:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.007 00:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.941 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:06.199 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:06.764 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:06.765 { 00:16:06.765 "cntlid": 57, 00:16:06.765 "qid": 0, 00:16:06.765 "state": "enabled", 00:16:06.765 "listen_address": { 00:16:06.765 "trtype": "TCP", 00:16:06.765 "adrfam": "IPv4", 00:16:06.765 "traddr": "10.0.0.2", 00:16:06.765 "trsvcid": "4420" 00:16:06.765 }, 00:16:06.765 "peer_address": { 00:16:06.765 "trtype": "TCP", 00:16:06.765 "adrfam": "IPv4", 00:16:06.765 "traddr": "10.0.0.1", 00:16:06.765 "trsvcid": "60296" 00:16:06.765 }, 00:16:06.765 "auth": { 00:16:06.765 "state": "completed", 00:16:06.765 "digest": "sha384", 00:16:06.765 "dhgroup": "ffdhe2048" 00:16:06.765 } 00:16:06.765 } 
00:16:06.765 ]' 00:16:06.765 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:07.022 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.022 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:07.022 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.022 00:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:07.022 00:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.022 00:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.022 00:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.280 00:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:08.213 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:08.471 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:08.729 00:16:08.729 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:08.729 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.729 00:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:08.987 { 00:16:08.987 "cntlid": 59, 00:16:08.987 "qid": 0, 00:16:08.987 "state": "enabled", 00:16:08.987 "listen_address": { 00:16:08.987 "trtype": "TCP", 00:16:08.987 "adrfam": "IPv4", 00:16:08.987 "traddr": "10.0.0.2", 00:16:08.987 "trsvcid": "4420" 00:16:08.987 }, 00:16:08.987 "peer_address": { 00:16:08.987 "trtype": "TCP", 00:16:08.987 "adrfam": "IPv4", 00:16:08.987 "traddr": "10.0.0.1", 00:16:08.987 "trsvcid": "60328" 00:16:08.987 }, 00:16:08.987 "auth": { 00:16:08.987 "state": "completed", 00:16:08.987 "digest": "sha384", 00:16:08.987 "dhgroup": "ffdhe2048" 00:16:08.987 } 00:16:08.987 } 00:16:08.987 ]' 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.987 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:09.244 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.245 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.245 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.502 00:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:10.435 00:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.693 00:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:10.693 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:10.693 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:10.951 00:16:10.951 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:10.951 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:10.952 00:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:11.209 { 00:16:11.209 "cntlid": 61, 00:16:11.209 "qid": 0, 00:16:11.209 "state": "enabled", 00:16:11.209 "listen_address": { 00:16:11.209 "trtype": "TCP", 00:16:11.209 "adrfam": "IPv4", 00:16:11.209 "traddr": "10.0.0.2", 00:16:11.209 "trsvcid": "4420" 00:16:11.209 }, 00:16:11.209 "peer_address": { 00:16:11.209 "trtype": "TCP", 00:16:11.209 "adrfam": "IPv4", 00:16:11.209 "traddr": "10.0.0.1", 00:16:11.209 "trsvcid": "60366" 00:16:11.209 }, 00:16:11.209 "auth": { 00:16:11.209 "state": "completed", 00:16:11.209 "digest": "sha384", 00:16:11.209 "dhgroup": "ffdhe2048" 00:16:11.209 } 00:16:11.209 } 00:16:11.209 ]' 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.209 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.467 00:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:12.400 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.659 00:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.224 00:16:13.224 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:13.224 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:13.224 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:13.481 { 00:16:13.481 "cntlid": 63, 00:16:13.481 "qid": 0, 00:16:13.481 "state": "enabled", 00:16:13.481 "listen_address": { 00:16:13.481 "trtype": "TCP", 00:16:13.481 "adrfam": "IPv4", 00:16:13.481 "traddr": "10.0.0.2", 00:16:13.481 "trsvcid": "4420" 00:16:13.481 }, 00:16:13.481 "peer_address": { 00:16:13.481 "trtype": "TCP", 00:16:13.481 "adrfam": "IPv4", 00:16:13.481 "traddr": "10.0.0.1", 00:16:13.481 "trsvcid": "60398" 00:16:13.481 }, 00:16:13.481 "auth": { 00:16:13.481 "state": "completed", 00:16:13.481 "digest": "sha384", 00:16:13.481 "dhgroup": "ffdhe2048" 00:16:13.481 } 00:16:13.481 } 00:16:13.481 ]' 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.481 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.739 00:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.729 00:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:14.987 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:15.244 00:16:15.502 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:15.502 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:15.502 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:15.760 { 00:16:15.760 "cntlid": 65, 00:16:15.760 "qid": 0, 00:16:15.760 "state": "enabled", 00:16:15.760 "listen_address": { 00:16:15.760 "trtype": "TCP", 00:16:15.760 "adrfam": "IPv4", 00:16:15.760 "traddr": "10.0.0.2", 00:16:15.760 "trsvcid": "4420" 00:16:15.760 }, 00:16:15.760 "peer_address": { 00:16:15.760 "trtype": "TCP", 00:16:15.760 "adrfam": "IPv4", 00:16:15.760 "traddr": "10.0.0.1", 00:16:15.760 "trsvcid": "34994" 00:16:15.760 }, 00:16:15.760 "auth": { 00:16:15.760 "state": "completed", 00:16:15.760 "digest": "sha384", 00:16:15.760 "dhgroup": "ffdhe3072" 00:16:15.760 } 00:16:15.760 } 00:16:15.760 ]' 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.760 00:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.018 00:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:16:16.951 00:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.951 00:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.951 00:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:16.951 
00:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.951 00:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:16.951 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:16.951 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:16.951 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:17.208 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:17.474 00:16:17.474 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:17.475 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:17.475 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.740 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.740 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.740 00:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.740 00:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.740 00:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.740 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:17.740 { 00:16:17.740 "cntlid": 67, 00:16:17.740 "qid": 0, 00:16:17.740 "state": "enabled", 00:16:17.740 "listen_address": { 00:16:17.740 "trtype": "TCP", 00:16:17.740 "adrfam": "IPv4", 00:16:17.740 "traddr": "10.0.0.2", 00:16:17.740 "trsvcid": 
"4420" 00:16:17.740 }, 00:16:17.740 "peer_address": { 00:16:17.740 "trtype": "TCP", 00:16:17.740 "adrfam": "IPv4", 00:16:17.740 "traddr": "10.0.0.1", 00:16:17.740 "trsvcid": "35022" 00:16:17.740 }, 00:16:17.740 "auth": { 00:16:17.740 "state": "completed", 00:16:17.740 "digest": "sha384", 00:16:17.740 "dhgroup": "ffdhe3072" 00:16:17.740 } 00:16:17.740 } 00:16:17.740 ]' 00:16:17.740 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:17.997 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.997 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:17.997 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.997 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:17.997 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.997 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.997 00:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.255 00:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.189 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:19.447 00:31:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:19.447 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:19.704 00:16:19.704 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:19.704 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:19.704 00:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:19.963 { 00:16:19.963 "cntlid": 69, 00:16:19.963 "qid": 0, 00:16:19.963 "state": "enabled", 00:16:19.963 "listen_address": { 00:16:19.963 "trtype": "TCP", 00:16:19.963 "adrfam": "IPv4", 00:16:19.963 "traddr": "10.0.0.2", 00:16:19.963 "trsvcid": "4420" 00:16:19.963 }, 00:16:19.963 "peer_address": { 00:16:19.963 "trtype": "TCP", 00:16:19.963 "adrfam": "IPv4", 00:16:19.963 "traddr": "10.0.0.1", 00:16:19.963 "trsvcid": "35040" 00:16:19.963 }, 00:16:19.963 "auth": { 00:16:19.963 "state": "completed", 00:16:19.963 "digest": "sha384", 00:16:19.963 "dhgroup": "ffdhe3072" 00:16:19.963 } 00:16:19.963 } 00:16:19.963 ]' 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.963 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:20.220 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.220 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.220 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.220 00:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.154 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.412 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.978 00:16:21.978 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:21.978 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:21.978 00:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:22.236 { 00:16:22.236 "cntlid": 71, 00:16:22.236 "qid": 0, 00:16:22.236 "state": "enabled", 00:16:22.236 "listen_address": { 00:16:22.236 "trtype": "TCP", 00:16:22.236 "adrfam": "IPv4", 00:16:22.236 "traddr": "10.0.0.2", 00:16:22.236 "trsvcid": "4420" 00:16:22.236 }, 00:16:22.236 "peer_address": { 00:16:22.236 "trtype": "TCP", 00:16:22.236 "adrfam": "IPv4", 00:16:22.236 "traddr": "10.0.0.1", 00:16:22.236 "trsvcid": "35080" 00:16:22.236 }, 00:16:22.236 "auth": { 00:16:22.236 "state": "completed", 00:16:22.236 "digest": "sha384", 00:16:22.236 "dhgroup": "ffdhe3072" 00:16:22.236 } 00:16:22.236 } 00:16:22.236 ]' 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.236 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.494 00:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:23.427 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:23.427 00:31:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:23.685 00:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:23.944 00:16:23.944 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:23.944 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:23.944 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.203 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.203 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.203 00:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.203 00:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:24.461 { 00:16:24.461 "cntlid": 73, 00:16:24.461 "qid": 0, 00:16:24.461 "state": "enabled", 00:16:24.461 "listen_address": { 00:16:24.461 "trtype": "TCP", 00:16:24.461 "adrfam": "IPv4", 00:16:24.461 "traddr": "10.0.0.2", 00:16:24.461 "trsvcid": "4420" 00:16:24.461 }, 00:16:24.461 "peer_address": { 00:16:24.461 "trtype": "TCP", 00:16:24.461 "adrfam": "IPv4", 00:16:24.461 "traddr": "10.0.0.1", 00:16:24.461 "trsvcid": "35102" 00:16:24.461 }, 00:16:24.461 "auth": { 00:16:24.461 "state": "completed", 00:16:24.461 "digest": "sha384", 00:16:24.461 "dhgroup": "ffdhe4096" 00:16:24.461 } 00:16:24.461 } 00:16:24.461 ]' 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.461 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.720 00:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:16:25.653 00:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.654 00:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.654 00:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:25.654 00:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.654 00:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:25.654 00:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:25.654 00:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:25.654 00:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.911 00:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:25.912 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:25.912 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:26.478 00:16:26.478 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:26.478 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:26.478 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:26.735 { 00:16:26.735 "cntlid": 75, 00:16:26.735 "qid": 0, 00:16:26.735 "state": "enabled", 00:16:26.735 "listen_address": { 00:16:26.735 "trtype": "TCP", 00:16:26.735 "adrfam": "IPv4", 00:16:26.735 "traddr": "10.0.0.2", 00:16:26.735 "trsvcid": "4420" 00:16:26.735 }, 00:16:26.735 "peer_address": { 00:16:26.735 "trtype": "TCP", 00:16:26.735 "adrfam": "IPv4", 00:16:26.735 "traddr": "10.0.0.1", 00:16:26.735 "trsvcid": "36614" 00:16:26.735 }, 00:16:26.735 "auth": { 00:16:26.735 "state": "completed", 00:16:26.735 "digest": "sha384", 00:16:26.735 "dhgroup": "ffdhe4096" 00:16:26.735 } 00:16:26.735 } 00:16:26.735 ]' 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.735 00:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.993 00:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:16:27.956 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:28.214 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.214 00:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.214 00:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.214 00:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.214 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:28.214 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:28.214 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:28.472 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:28.729 00:16:28.729 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:28.730 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.730 00:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
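For readers decoding the trace above, here is a minimal illustrative sketch (not part of the captured log) of one digest/dhgroup/key iteration that target/auth.sh is repeating in this run. Every RPC name, flag, address, and NQN below is taken verbatim from the log lines; the shell variable names are ours, and rpc_cmd (target side) is assumed here to be plain rpc.py against the target's default socket, while hostrpc uses /var/tmp/host.sock as shown in the log.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock   # host-side SPDK app (what "hostrpc" targets in the log)
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host side: restrict the initiator to the digest/dhgroup pair under test.
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Target side: allow the host NQN with one of the pre-loaded DH-HMAC-CHAP keys.
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2

    # Host side: attach a controller, authenticating with the same key.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key2

    # Target side: confirm the queue pair negotiated the expected auth parameters.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

    # Tear down before the next key/dhgroup combination.
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The same cycle also runs once per key through the kernel initiator ("nvme connect ... --dhchap-secret DHHC-1:..." followed by "nvme disconnect"), which is the source of the connect/disconnect lines interleaved in the trace.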
00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:28.988 { 00:16:28.988 "cntlid": 77, 00:16:28.988 "qid": 0, 00:16:28.988 "state": "enabled", 00:16:28.988 "listen_address": { 00:16:28.988 "trtype": "TCP", 00:16:28.988 "adrfam": "IPv4", 00:16:28.988 "traddr": "10.0.0.2", 00:16:28.988 "trsvcid": "4420" 00:16:28.988 }, 00:16:28.988 "peer_address": { 00:16:28.988 "trtype": "TCP", 00:16:28.988 "adrfam": "IPv4", 00:16:28.988 "traddr": "10.0.0.1", 00:16:28.988 "trsvcid": "36628" 00:16:28.988 }, 00:16:28.988 "auth": { 00:16:28.988 "state": "completed", 00:16:28.988 "digest": "sha384", 00:16:28.988 "dhgroup": "ffdhe4096" 00:16:28.988 } 00:16:28.988 } 00:16:28.988 ]' 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.988 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:29.246 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.246 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:29.246 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.246 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.246 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.504 00:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:30.438 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:30.696 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:16:30.696 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:30.696 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:30.696 00:31:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:30.696 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.697 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:30.697 00:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.697 00:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.697 00:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.697 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.697 00:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.262 00:16:31.262 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:31.262 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:31.262 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:31.520 { 00:16:31.520 "cntlid": 79, 00:16:31.520 "qid": 0, 00:16:31.520 "state": "enabled", 00:16:31.520 "listen_address": { 00:16:31.520 "trtype": "TCP", 00:16:31.520 "adrfam": "IPv4", 00:16:31.520 "traddr": "10.0.0.2", 00:16:31.520 "trsvcid": "4420" 00:16:31.520 }, 00:16:31.520 "peer_address": { 00:16:31.520 "trtype": "TCP", 00:16:31.520 "adrfam": "IPv4", 00:16:31.520 "traddr": "10.0.0.1", 00:16:31.520 "trsvcid": "36668" 00:16:31.520 }, 00:16:31.520 "auth": { 00:16:31.520 "state": "completed", 00:16:31.520 "digest": "sha384", 00:16:31.520 "dhgroup": "ffdhe4096" 00:16:31.520 } 00:16:31.520 } 00:16:31.520 ]' 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.520 00:31:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.520 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.778 00:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:32.712 00:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:32.970 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:33.536 00:16:33.536 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:33.536 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:33.536 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:33.794 { 00:16:33.794 "cntlid": 81, 00:16:33.794 "qid": 0, 00:16:33.794 "state": "enabled", 00:16:33.794 "listen_address": { 00:16:33.794 "trtype": "TCP", 00:16:33.794 "adrfam": "IPv4", 00:16:33.794 "traddr": "10.0.0.2", 00:16:33.794 "trsvcid": "4420" 00:16:33.794 }, 00:16:33.794 "peer_address": { 00:16:33.794 "trtype": "TCP", 00:16:33.794 "adrfam": "IPv4", 00:16:33.794 "traddr": "10.0.0.1", 00:16:33.794 "trsvcid": "36692" 00:16:33.794 }, 00:16:33.794 "auth": { 00:16:33.794 "state": "completed", 00:16:33.794 "digest": "sha384", 00:16:33.794 "dhgroup": "ffdhe6144" 00:16:33.794 } 00:16:33.794 } 00:16:33.794 ]' 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.794 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:34.053 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.053 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.053 00:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.311 00:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:35.245 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:35.503 00:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:36.069 00:16:36.069 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:36.069 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:36.069 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:36.326 { 00:16:36.326 "cntlid": 83, 00:16:36.326 "qid": 0, 00:16:36.326 "state": "enabled", 00:16:36.326 "listen_address": { 00:16:36.326 "trtype": "TCP", 00:16:36.326 "adrfam": "IPv4", 00:16:36.326 "traddr": "10.0.0.2", 00:16:36.326 "trsvcid": "4420" 00:16:36.326 }, 00:16:36.326 "peer_address": { 00:16:36.326 
"trtype": "TCP", 00:16:36.326 "adrfam": "IPv4", 00:16:36.326 "traddr": "10.0.0.1", 00:16:36.326 "trsvcid": "57724" 00:16:36.326 }, 00:16:36.326 "auth": { 00:16:36.326 "state": "completed", 00:16:36.326 "digest": "sha384", 00:16:36.326 "dhgroup": "ffdhe6144" 00:16:36.326 } 00:16:36.326 } 00:16:36.326 ]' 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.326 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.891 00:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:16:37.830 00:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.831 00:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:37.831 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:37.831 00:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:38.396 00:16:38.396 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:38.396 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:38.396 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.654 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.654 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.654 00:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:38.654 00:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.654 00:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:38.654 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:38.654 { 00:16:38.654 "cntlid": 85, 00:16:38.654 "qid": 0, 00:16:38.654 "state": "enabled", 00:16:38.654 "listen_address": { 00:16:38.654 "trtype": "TCP", 00:16:38.654 "adrfam": "IPv4", 00:16:38.654 "traddr": "10.0.0.2", 00:16:38.654 "trsvcid": "4420" 00:16:38.654 }, 00:16:38.654 "peer_address": { 00:16:38.654 "trtype": "TCP", 00:16:38.654 "adrfam": "IPv4", 00:16:38.654 "traddr": "10.0.0.1", 00:16:38.654 "trsvcid": "57744" 00:16:38.654 }, 00:16:38.654 "auth": { 00:16:38.654 "state": "completed", 00:16:38.654 "digest": "sha384", 00:16:38.654 "dhgroup": "ffdhe6144" 00:16:38.654 } 00:16:38.654 } 00:16:38.654 ]' 00:16:38.654 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:38.912 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.912 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:38.912 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.912 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:38.912 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.912 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.912 00:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.169 00:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.103 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:40.361 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:16:40.361 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:40.361 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.361 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.362 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.362 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:40.362 00:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:40.362 00:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.362 00:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:40.362 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.362 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.927 00:16:40.927 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:40.927 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:40.927 00:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.185 00:32:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:41.185 { 00:16:41.185 "cntlid": 87, 00:16:41.185 "qid": 0, 00:16:41.185 "state": "enabled", 00:16:41.185 "listen_address": { 00:16:41.185 "trtype": "TCP", 00:16:41.185 "adrfam": "IPv4", 00:16:41.185 "traddr": "10.0.0.2", 00:16:41.185 "trsvcid": "4420" 00:16:41.185 }, 00:16:41.185 "peer_address": { 00:16:41.185 "trtype": "TCP", 00:16:41.185 "adrfam": "IPv4", 00:16:41.185 "traddr": "10.0.0.1", 00:16:41.185 "trsvcid": "57780" 00:16:41.185 }, 00:16:41.185 "auth": { 00:16:41.185 "state": "completed", 00:16:41.185 "digest": "sha384", 00:16:41.185 "dhgroup": "ffdhe6144" 00:16:41.185 } 00:16:41.185 } 00:16:41.185 ]' 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.185 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.445 00:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:42.421 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:42.679 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:16:42.679 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:42.679 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.679 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.679 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.679 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:42.679 00:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:42.680 00:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.680 00:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:42.680 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:42.680 00:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:43.614 00:16:43.614 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:43.614 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.614 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:43.872 { 00:16:43.872 "cntlid": 89, 00:16:43.872 "qid": 0, 00:16:43.872 "state": "enabled", 00:16:43.872 "listen_address": { 00:16:43.872 "trtype": "TCP", 00:16:43.872 "adrfam": "IPv4", 00:16:43.872 "traddr": "10.0.0.2", 00:16:43.872 "trsvcid": "4420" 00:16:43.872 }, 00:16:43.872 "peer_address": { 00:16:43.872 "trtype": "TCP", 00:16:43.872 "adrfam": "IPv4", 00:16:43.872 "traddr": "10.0.0.1", 00:16:43.872 "trsvcid": "57792" 00:16:43.872 }, 00:16:43.872 "auth": { 00:16:43.872 "state": "completed", 00:16:43.872 "digest": "sha384", 00:16:43.872 "dhgroup": "ffdhe8192" 00:16:43.872 } 00:16:43.872 } 00:16:43.872 ]' 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:43.872 00:32:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.872 00:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:43.872 00:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.872 00:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:44.130 00:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.130 00:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.130 00:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.388 00:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.321 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:45.578 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:16:45.578 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:45.578 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.578 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.578 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:45.579 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:45.579 00:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.579 00:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.579 00:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.579 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:45.579 00:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:46.510 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:46.510 { 00:16:46.510 "cntlid": 91, 00:16:46.510 "qid": 0, 00:16:46.510 "state": "enabled", 00:16:46.510 "listen_address": { 00:16:46.510 "trtype": "TCP", 00:16:46.510 "adrfam": "IPv4", 00:16:46.510 "traddr": "10.0.0.2", 00:16:46.510 "trsvcid": "4420" 00:16:46.510 }, 00:16:46.510 "peer_address": { 00:16:46.510 "trtype": "TCP", 00:16:46.510 "adrfam": "IPv4", 00:16:46.510 "traddr": "10.0.0.1", 00:16:46.510 "trsvcid": "47752" 00:16:46.510 }, 00:16:46.510 "auth": { 00:16:46.510 "state": "completed", 00:16:46.510 "digest": "sha384", 00:16:46.510 "dhgroup": "ffdhe8192" 00:16:46.510 } 00:16:46.510 } 00:16:46.510 ]' 00:16:46.510 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:46.768 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.768 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:46.768 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.768 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:46.768 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.768 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.768 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.026 00:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:47.957 00:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:48.214 00:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:49.147 00:16:49.147 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:49.147 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:49.147 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
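For reference, the trace above repeats one pattern per (digest, dhgroup, key) combination: restrict the host's DH-HMAC-CHAP digests and DH groups, allow the host NQN on the subsystem with a given key, attach a controller from the host side with the same key, then check the resulting qpair's auth digest, dhgroup and state before detaching. Below is a minimal sketch of a single such iteration, reconstructed only from the commands already traced in this log; it is not the literal auth.sh helper, and it assumes the SPDK target (default RPC socket), the host RPC socket at /var/tmp/host.sock, and the TCP listener at 10.0.0.2:4420 have been set up earlier in the run.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass, pieced together from the traced
# commands above. Target-side calls go to the default rpc.py socket
# (an assumption); host-side bdev_nvme_* calls use /var/tmp/host.sock as traced.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
digest=sha384 dhgroup=ffdhe8192 key=key2   # one (digest, dhgroup, key) combination

# Restrict the host to a single digest/DH-group pair for this pass.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the subsystem with the matching DH-HMAC-CHAP key,
# then attach a controller from the host side using the same key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"

# Verify the controller exists and the qpair completed authentication
# with the expected digest and DH group.
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]

# Tear down before the next combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The separate nvme-cli step seen in the trace (nvme connect ... --dhchap-secret DHHC-1:NN:...) exercises the same subsystem from the kernel initiator with the corresponding secret and is torn down again with nvme disconnect and nvmf_subsystem_remove_host; it is omitted from the sketch above.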
00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:49.405 { 00:16:49.405 "cntlid": 93, 00:16:49.405 "qid": 0, 00:16:49.405 "state": "enabled", 00:16:49.405 "listen_address": { 00:16:49.405 "trtype": "TCP", 00:16:49.405 "adrfam": "IPv4", 00:16:49.405 "traddr": "10.0.0.2", 00:16:49.405 "trsvcid": "4420" 00:16:49.405 }, 00:16:49.405 "peer_address": { 00:16:49.405 "trtype": "TCP", 00:16:49.405 "adrfam": "IPv4", 00:16:49.405 "traddr": "10.0.0.1", 00:16:49.405 "trsvcid": "47784" 00:16:49.405 }, 00:16:49.405 "auth": { 00:16:49.405 "state": "completed", 00:16:49.405 "digest": "sha384", 00:16:49.405 "dhgroup": "ffdhe8192" 00:16:49.405 } 00:16:49.405 } 00:16:49.405 ]' 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.405 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.663 00:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:16:50.596 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.596 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.596 00:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:50.596 00:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.596 00:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:50.597 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:50.597 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.597 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.854 00:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.788 00:16:51.788 00:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:51.788 00:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:51.788 00:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:52.046 { 00:16:52.046 "cntlid": 95, 00:16:52.046 "qid": 0, 00:16:52.046 "state": "enabled", 00:16:52.046 "listen_address": { 00:16:52.046 "trtype": "TCP", 00:16:52.046 "adrfam": "IPv4", 00:16:52.046 "traddr": "10.0.0.2", 00:16:52.046 "trsvcid": "4420" 00:16:52.046 }, 00:16:52.046 "peer_address": { 00:16:52.046 "trtype": "TCP", 00:16:52.046 "adrfam": "IPv4", 00:16:52.046 "traddr": "10.0.0.1", 00:16:52.046 "trsvcid": "47808" 00:16:52.046 }, 00:16:52.046 "auth": { 00:16:52.046 "state": "completed", 00:16:52.046 "digest": "sha384", 00:16:52.046 "dhgroup": "ffdhe8192" 00:16:52.046 } 00:16:52.046 } 00:16:52.046 ]' 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.046 00:32:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.046 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.304 00:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.237 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:53.495 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:53.752 00:16:53.752 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:53.752 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:53.752 00:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:54.319 { 00:16:54.319 "cntlid": 97, 00:16:54.319 "qid": 0, 00:16:54.319 "state": "enabled", 00:16:54.319 "listen_address": { 00:16:54.319 "trtype": "TCP", 00:16:54.319 "adrfam": "IPv4", 00:16:54.319 "traddr": "10.0.0.2", 00:16:54.319 "trsvcid": "4420" 00:16:54.319 }, 00:16:54.319 "peer_address": { 00:16:54.319 "trtype": "TCP", 00:16:54.319 "adrfam": "IPv4", 00:16:54.319 "traddr": "10.0.0.1", 00:16:54.319 "trsvcid": "47836" 00:16:54.319 }, 00:16:54.319 "auth": { 00:16:54.319 "state": "completed", 00:16:54.319 "digest": "sha512", 00:16:54.319 "dhgroup": "null" 00:16:54.319 } 00:16:54.319 } 00:16:54.319 ]' 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.319 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.576 00:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:16:55.537 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.537 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.537 00:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.537 00:32:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.537 00:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.537 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:55.537 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.537 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:55.796 00:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:56.055 00:16:56.055 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:56.055 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.055 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:56.313 { 00:16:56.313 "cntlid": 99, 00:16:56.313 "qid": 0, 00:16:56.313 "state": "enabled", 00:16:56.313 "listen_address": { 00:16:56.313 "trtype": "TCP", 00:16:56.313 "adrfam": "IPv4", 00:16:56.313 "traddr": "10.0.0.2", 00:16:56.313 "trsvcid": "4420" 00:16:56.313 }, 
00:16:56.313 "peer_address": { 00:16:56.313 "trtype": "TCP", 00:16:56.313 "adrfam": "IPv4", 00:16:56.313 "traddr": "10.0.0.1", 00:16:56.313 "trsvcid": "59856" 00:16:56.313 }, 00:16:56.313 "auth": { 00:16:56.313 "state": "completed", 00:16:56.313 "digest": "sha512", 00:16:56.313 "dhgroup": "null" 00:16:56.313 } 00:16:56.313 } 00:16:56.313 ]' 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.313 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.571 00:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:16:57.508 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.509 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.509 00:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.509 00:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.509 00:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.509 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:57.509 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.509 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:57.769 00:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:58.028 00:16:58.286 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:58.286 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:58.286 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.286 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.286 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.286 00:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.286 00:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:58.543 { 00:16:58.543 "cntlid": 101, 00:16:58.543 "qid": 0, 00:16:58.543 "state": "enabled", 00:16:58.543 "listen_address": { 00:16:58.543 "trtype": "TCP", 00:16:58.543 "adrfam": "IPv4", 00:16:58.543 "traddr": "10.0.0.2", 00:16:58.543 "trsvcid": "4420" 00:16:58.543 }, 00:16:58.543 "peer_address": { 00:16:58.543 "trtype": "TCP", 00:16:58.543 "adrfam": "IPv4", 00:16:58.543 "traddr": "10.0.0.1", 00:16:58.543 "trsvcid": "59892" 00:16:58.543 }, 00:16:58.543 "auth": { 00:16:58.543 "state": "completed", 00:16:58.543 "digest": "sha512", 00:16:58.543 "dhgroup": "null" 00:16:58.543 } 00:16:58.543 } 00:16:58.543 ]' 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.543 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.802 00:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.735 00:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.994 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.252 00:17:00.252 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:00.252 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:00.252 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:00.510 { 00:17:00.510 "cntlid": 103, 00:17:00.510 "qid": 0, 00:17:00.510 "state": "enabled", 00:17:00.510 "listen_address": { 00:17:00.510 "trtype": "TCP", 00:17:00.510 "adrfam": "IPv4", 00:17:00.510 "traddr": "10.0.0.2", 00:17:00.510 "trsvcid": "4420" 00:17:00.510 }, 00:17:00.510 "peer_address": { 00:17:00.510 "trtype": "TCP", 00:17:00.510 "adrfam": "IPv4", 00:17:00.510 "traddr": "10.0.0.1", 00:17:00.510 "trsvcid": "59912" 00:17:00.510 }, 00:17:00.510 "auth": { 00:17:00.510 "state": "completed", 00:17:00.510 "digest": "sha512", 00:17:00.510 "dhgroup": "null" 00:17:00.510 } 00:17:00.510 } 00:17:00.510 ]' 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.510 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:00.768 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:00.768 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:00.768 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.768 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.768 00:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.025 00:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:17:01.958 00:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.958 00:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.958 00:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.958 00:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.958 00:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.958 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.958 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:01.958 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.958 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.215 00:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.216 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:02.216 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:02.781 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.781 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:02.781 { 00:17:02.781 "cntlid": 105, 00:17:02.781 "qid": 0, 00:17:02.781 "state": "enabled", 00:17:02.781 "listen_address": { 00:17:02.781 "trtype": "TCP", 00:17:02.781 "adrfam": "IPv4", 00:17:02.781 "traddr": "10.0.0.2", 00:17:02.781 "trsvcid": "4420" 00:17:02.781 }, 00:17:02.781 "peer_address": { 00:17:02.781 "trtype": "TCP", 00:17:02.781 "adrfam": "IPv4", 00:17:02.781 "traddr": "10.0.0.1", 00:17:02.781 "trsvcid": "59936" 00:17:02.781 }, 00:17:02.781 "auth": { 00:17:02.781 "state": "completed", 00:17:02.781 "digest": "sha512", 00:17:02.781 "dhgroup": "ffdhe2048" 00:17:02.781 } 00:17:02.781 } 00:17:02.781 ]' 00:17:02.782 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:03.040 00:32:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.040 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:03.040 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.040 00:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:03.040 00:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.040 00:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.040 00:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.298 00:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.230 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:04.488 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:04.980 00:17:04.980 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:04.980 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:04.980 00:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:05.238 { 00:17:05.238 "cntlid": 107, 00:17:05.238 "qid": 0, 00:17:05.238 "state": "enabled", 00:17:05.238 "listen_address": { 00:17:05.238 "trtype": "TCP", 00:17:05.238 "adrfam": "IPv4", 00:17:05.238 "traddr": "10.0.0.2", 00:17:05.238 "trsvcid": "4420" 00:17:05.238 }, 00:17:05.238 "peer_address": { 00:17:05.238 "trtype": "TCP", 00:17:05.238 "adrfam": "IPv4", 00:17:05.238 "traddr": "10.0.0.1", 00:17:05.238 "trsvcid": "57120" 00:17:05.238 }, 00:17:05.238 "auth": { 00:17:05.238 "state": "completed", 00:17:05.238 "digest": "sha512", 00:17:05.238 "dhgroup": "ffdhe2048" 00:17:05.238 } 00:17:05.238 } 00:17:05.238 ]' 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.238 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.494 00:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.426 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:06.684 00:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:06.942 00:17:06.942 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:06.942 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.942 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:07.200 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.200 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.200 00:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.200 00:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.200 00:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
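
Each pass of the loop above drives one digest/dhgroup/key combination end to end: restrict the host-side bdev_nvme options, register the key for the host NQN on the target, attach a controller so DH-HMAC-CHAP runs, read back the qpair, then tear down. Condensed into a standalone shell sketch, using the same rpc.py invocations that appear in this run (the host NQN, addresses and paths are specific to this CI host; the target-side calls are wrapped by rpc_cmd in the harness, so pointing them at rpc.py's default socket here is an assumption):

  # One connect_authenticate pass, sha512 / ffdhe2048 / key2 as in the cycle above.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Host stack: allow only the digest/dhgroup under test.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Target: let this host authenticate with the chosen key.
  "$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2
  # Attach from the host stack; authentication happens during connect.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
  # Inspect the negotiated auth parameters on the target, then clean up.
  "$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same five steps repeat below for the remaining dhgroup/key pairs.
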
00:17:07.200 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:07.200 { 00:17:07.200 "cntlid": 109, 00:17:07.200 "qid": 0, 00:17:07.200 "state": "enabled", 00:17:07.200 "listen_address": { 00:17:07.200 "trtype": "TCP", 00:17:07.200 "adrfam": "IPv4", 00:17:07.200 "traddr": "10.0.0.2", 00:17:07.200 "trsvcid": "4420" 00:17:07.200 }, 00:17:07.200 "peer_address": { 00:17:07.200 "trtype": "TCP", 00:17:07.200 "adrfam": "IPv4", 00:17:07.200 "traddr": "10.0.0.1", 00:17:07.200 "trsvcid": "57162" 00:17:07.200 }, 00:17:07.200 "auth": { 00:17:07.200 "state": "completed", 00:17:07.200 "digest": "sha512", 00:17:07.200 "dhgroup": "ffdhe2048" 00:17:07.200 } 00:17:07.200 } 00:17:07.200 ]' 00:17:07.200 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:07.458 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.458 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:07.458 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.458 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:07.458 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.458 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.458 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.716 00:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.649 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.907 00:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.194 00:17:09.194 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:09.194 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:09.195 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:09.455 { 00:17:09.455 "cntlid": 111, 00:17:09.455 "qid": 0, 00:17:09.455 "state": "enabled", 00:17:09.455 "listen_address": { 00:17:09.455 "trtype": "TCP", 00:17:09.455 "adrfam": "IPv4", 00:17:09.455 "traddr": "10.0.0.2", 00:17:09.455 "trsvcid": "4420" 00:17:09.455 }, 00:17:09.455 "peer_address": { 00:17:09.455 "trtype": "TCP", 00:17:09.455 "adrfam": "IPv4", 00:17:09.455 "traddr": "10.0.0.1", 00:17:09.455 "trsvcid": "57184" 00:17:09.455 }, 00:17:09.455 "auth": { 00:17:09.455 "state": "completed", 00:17:09.455 "digest": "sha512", 00:17:09.455 "dhgroup": "ffdhe2048" 00:17:09.455 } 00:17:09.455 } 00:17:09.455 ]' 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.455 00:32:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.455 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.713 00:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.647 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:10.905 00:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.162 00:17:11.162 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:11.162 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:11.162 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.421 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.421 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.421 00:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.421 00:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.421 00:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.421 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:11.421 { 00:17:11.421 "cntlid": 113, 00:17:11.421 "qid": 0, 00:17:11.421 "state": "enabled", 00:17:11.421 "listen_address": { 00:17:11.421 "trtype": "TCP", 00:17:11.421 "adrfam": "IPv4", 00:17:11.421 "traddr": "10.0.0.2", 00:17:11.421 "trsvcid": "4420" 00:17:11.421 }, 00:17:11.421 "peer_address": { 00:17:11.421 "trtype": "TCP", 00:17:11.421 "adrfam": "IPv4", 00:17:11.421 "traddr": "10.0.0.1", 00:17:11.421 "trsvcid": "57204" 00:17:11.421 }, 00:17:11.421 "auth": { 00:17:11.421 "state": "completed", 00:17:11.421 "digest": "sha512", 00:17:11.421 "dhgroup": "ffdhe3072" 00:17:11.421 } 00:17:11.421 } 00:17:11.421 ]' 00:17:11.421 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:11.679 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.679 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:11.679 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.679 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:11.679 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.679 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.679 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.937 00:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.870 00:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:13.128 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:13.386 00:17:13.386 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:13.386 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.386 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:13.644 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.644 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.644 00:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.644 00:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.644 00:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.644 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:13.644 { 00:17:13.644 "cntlid": 115, 00:17:13.644 "qid": 0, 00:17:13.644 "state": "enabled", 00:17:13.644 "listen_address": { 00:17:13.644 "trtype": "TCP", 00:17:13.644 "adrfam": "IPv4", 00:17:13.644 "traddr": "10.0.0.2", 00:17:13.644 "trsvcid": "4420" 00:17:13.644 }, 00:17:13.644 "peer_address": { 00:17:13.644 
"trtype": "TCP", 00:17:13.644 "adrfam": "IPv4", 00:17:13.644 "traddr": "10.0.0.1", 00:17:13.644 "trsvcid": "57230" 00:17:13.644 }, 00:17:13.644 "auth": { 00:17:13.644 "state": "completed", 00:17:13.644 "digest": "sha512", 00:17:13.644 "dhgroup": "ffdhe3072" 00:17:13.644 } 00:17:13.644 } 00:17:13.644 ]' 00:17:13.644 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:13.902 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.902 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:13.902 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.902 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:13.902 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.902 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.902 00:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.160 00:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.091 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.347 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:17:15.347 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:15.347 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.347 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.348 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:15.348 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:17:15.348 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:15.348 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.348 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.348 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.348 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.605 00:17:15.605 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:15.605 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:15.605 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:15.862 { 00:17:15.862 "cntlid": 117, 00:17:15.862 "qid": 0, 00:17:15.862 "state": "enabled", 00:17:15.862 "listen_address": { 00:17:15.862 "trtype": "TCP", 00:17:15.862 "adrfam": "IPv4", 00:17:15.862 "traddr": "10.0.0.2", 00:17:15.862 "trsvcid": "4420" 00:17:15.862 }, 00:17:15.862 "peer_address": { 00:17:15.862 "trtype": "TCP", 00:17:15.862 "adrfam": "IPv4", 00:17:15.862 "traddr": "10.0.0.1", 00:17:15.862 "trsvcid": "54378" 00:17:15.862 }, 00:17:15.862 "auth": { 00:17:15.862 "state": "completed", 00:17:15.862 "digest": "sha512", 00:17:15.862 "dhgroup": "ffdhe3072" 00:17:15.862 } 00:17:15.862 } 00:17:15.862 ]' 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.862 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:15.862 00:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.862 00:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:16.119 00:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.119 00:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.119 00:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.377 00:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.309 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.874 00:17:17.874 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:17.874 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:17.874 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.131 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.131 00:32:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.131 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.131 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.131 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:18.131 { 00:17:18.131 "cntlid": 119, 00:17:18.131 "qid": 0, 00:17:18.131 "state": "enabled", 00:17:18.131 "listen_address": { 00:17:18.131 "trtype": "TCP", 00:17:18.131 "adrfam": "IPv4", 00:17:18.131 "traddr": "10.0.0.2", 00:17:18.131 "trsvcid": "4420" 00:17:18.131 }, 00:17:18.131 "peer_address": { 00:17:18.131 "trtype": "TCP", 00:17:18.131 "adrfam": "IPv4", 00:17:18.131 "traddr": "10.0.0.1", 00:17:18.131 "trsvcid": "54386" 00:17:18.131 }, 00:17:18.131 "auth": { 00:17:18.131 "state": "completed", 00:17:18.131 "digest": "sha512", 00:17:18.131 "dhgroup": "ffdhe3072" 00:17:18.131 } 00:17:18.131 } 00:17:18.132 ]' 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.132 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.389 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.321 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:19.579 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:19.836 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:20.093 { 00:17:20.093 "cntlid": 121, 00:17:20.093 "qid": 0, 00:17:20.093 "state": "enabled", 00:17:20.093 "listen_address": { 00:17:20.093 "trtype": "TCP", 00:17:20.093 "adrfam": "IPv4", 00:17:20.093 "traddr": "10.0.0.2", 00:17:20.093 "trsvcid": "4420" 00:17:20.093 }, 00:17:20.093 "peer_address": { 00:17:20.093 "trtype": "TCP", 00:17:20.093 "adrfam": "IPv4", 00:17:20.093 "traddr": "10.0.0.1", 00:17:20.093 "trsvcid": "54400" 00:17:20.093 }, 00:17:20.093 "auth": { 00:17:20.093 "state": "completed", 00:17:20.093 "digest": "sha512", 00:17:20.093 "dhgroup": "ffdhe4096" 00:17:20.093 } 00:17:20.093 } 00:17:20.093 ]' 00:17:20.093 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:20.350 00:32:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.350 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:20.350 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.350 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:20.350 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.350 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.350 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.606 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:21.793 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:22.359 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:22.359 { 00:17:22.359 "cntlid": 123, 00:17:22.359 "qid": 0, 00:17:22.359 "state": "enabled", 00:17:22.359 "listen_address": { 00:17:22.359 "trtype": "TCP", 00:17:22.359 "adrfam": "IPv4", 00:17:22.359 "traddr": "10.0.0.2", 00:17:22.359 "trsvcid": "4420" 00:17:22.359 }, 00:17:22.359 "peer_address": { 00:17:22.359 "trtype": "TCP", 00:17:22.359 "adrfam": "IPv4", 00:17:22.359 "traddr": "10.0.0.1", 00:17:22.359 "trsvcid": "54422" 00:17:22.359 }, 00:17:22.359 "auth": { 00:17:22.359 "state": "completed", 00:17:22.359 "digest": "sha512", 00:17:22.359 "dhgroup": "ffdhe4096" 00:17:22.359 } 00:17:22.359 } 00:17:22.359 ]' 00:17:22.359 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:22.625 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.625 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:22.625 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.625 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:22.625 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.625 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.625 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.951 00:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:17:23.884 00:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:23.885 00:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.885 00:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.885 00:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.885 00:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.885 00:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:23.885 00:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.885 00:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:24.143 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:24.401 00:17:24.401 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:24.401 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:24.401 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.659 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.659 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.659 00:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.659 00:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.659 00:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
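
After the RPC-attached controller is detached, each pass re-authenticates once more through the kernel initiator (target/auth.sh@51-53 in the entries above). A minimal equivalent of that leg, with the hostnqn/hostid taken from this run and DHCHAP_SECRET standing in for the full DHHC-1:xx:... string of the key under test (the complete secrets appear verbatim in the log):

  # Kernel-initiator leg of the same check.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$DHCHAP_SECRET"
  # Drop the connection again before the next key is exercised.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
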
00:17:24.660 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:24.660 { 00:17:24.660 "cntlid": 125, 00:17:24.660 "qid": 0, 00:17:24.660 "state": "enabled", 00:17:24.660 "listen_address": { 00:17:24.660 "trtype": "TCP", 00:17:24.660 "adrfam": "IPv4", 00:17:24.660 "traddr": "10.0.0.2", 00:17:24.660 "trsvcid": "4420" 00:17:24.660 }, 00:17:24.660 "peer_address": { 00:17:24.660 "trtype": "TCP", 00:17:24.660 "adrfam": "IPv4", 00:17:24.660 "traddr": "10.0.0.1", 00:17:24.660 "trsvcid": "41924" 00:17:24.660 }, 00:17:24.660 "auth": { 00:17:24.660 "state": "completed", 00:17:24.660 "digest": "sha512", 00:17:24.660 "dhgroup": "ffdhe4096" 00:17:24.660 } 00:17:24.660 } 00:17:24.660 ]' 00:17:24.660 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:24.917 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.917 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:24.917 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.917 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:24.917 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.917 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.917 00:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.175 00:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.108 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.366 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.932 00:17:26.932 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:26.932 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:26.932 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.932 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.932 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.932 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.932 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:27.190 { 00:17:27.190 "cntlid": 127, 00:17:27.190 "qid": 0, 00:17:27.190 "state": "enabled", 00:17:27.190 "listen_address": { 00:17:27.190 "trtype": "TCP", 00:17:27.190 "adrfam": "IPv4", 00:17:27.190 "traddr": "10.0.0.2", 00:17:27.190 "trsvcid": "4420" 00:17:27.190 }, 00:17:27.190 "peer_address": { 00:17:27.190 "trtype": "TCP", 00:17:27.190 "adrfam": "IPv4", 00:17:27.190 "traddr": "10.0.0.1", 00:17:27.190 "trsvcid": "41948" 00:17:27.190 }, 00:17:27.190 "auth": { 00:17:27.190 "state": "completed", 00:17:27.190 "digest": "sha512", 00:17:27.190 "dhgroup": "ffdhe4096" 00:17:27.190 } 00:17:27.190 } 00:17:27.190 ]' 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.190 00:32:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.190 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.446 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.378 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:28.636 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:29.202 00:17:29.202 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:29.202 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:29.202 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:29.461 { 00:17:29.461 "cntlid": 129, 00:17:29.461 "qid": 0, 00:17:29.461 "state": "enabled", 00:17:29.461 "listen_address": { 00:17:29.461 "trtype": "TCP", 00:17:29.461 "adrfam": "IPv4", 00:17:29.461 "traddr": "10.0.0.2", 00:17:29.461 "trsvcid": "4420" 00:17:29.461 }, 00:17:29.461 "peer_address": { 00:17:29.461 "trtype": "TCP", 00:17:29.461 "adrfam": "IPv4", 00:17:29.461 "traddr": "10.0.0.1", 00:17:29.461 "trsvcid": "41972" 00:17:29.461 }, 00:17:29.461 "auth": { 00:17:29.461 "state": "completed", 00:17:29.461 "digest": "sha512", 00:17:29.461 "dhgroup": "ffdhe6144" 00:17:29.461 } 00:17:29.461 } 00:17:29.461 ]' 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.461 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:29.719 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.719 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:29.719 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.719 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.719 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.977 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.910 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:31.169 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:31.735 00:17:31.735 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:31.735 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:31.735 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.994 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.994 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.994 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.994 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.994 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.994 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:31.994 { 00:17:31.994 "cntlid": 131, 00:17:31.994 "qid": 0, 00:17:31.994 "state": "enabled", 00:17:31.994 "listen_address": { 00:17:31.994 "trtype": "TCP", 00:17:31.994 "adrfam": "IPv4", 00:17:31.994 "traddr": "10.0.0.2", 00:17:31.994 "trsvcid": "4420" 00:17:31.994 }, 00:17:31.994 "peer_address": { 00:17:31.994 
"trtype": "TCP", 00:17:31.994 "adrfam": "IPv4", 00:17:31.994 "traddr": "10.0.0.1", 00:17:31.994 "trsvcid": "41990" 00:17:31.994 }, 00:17:31.994 "auth": { 00:17:31.994 "state": "completed", 00:17:31.994 "digest": "sha512", 00:17:31.994 "dhgroup": "ffdhe6144" 00:17:31.994 } 00:17:31.994 } 00:17:31.994 ]' 00:17:31.994 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:31.994 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.994 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:31.994 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.994 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:31.994 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.994 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.994 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.252 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.185 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:33.442 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:34.008 00:17:34.008 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:34.008 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:34.008 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.267 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.267 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.267 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.267 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.267 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.267 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:34.267 { 00:17:34.267 "cntlid": 133, 00:17:34.267 "qid": 0, 00:17:34.267 "state": "enabled", 00:17:34.267 "listen_address": { 00:17:34.267 "trtype": "TCP", 00:17:34.267 "adrfam": "IPv4", 00:17:34.267 "traddr": "10.0.0.2", 00:17:34.267 "trsvcid": "4420" 00:17:34.267 }, 00:17:34.267 "peer_address": { 00:17:34.267 "trtype": "TCP", 00:17:34.267 "adrfam": "IPv4", 00:17:34.267 "traddr": "10.0.0.1", 00:17:34.267 "trsvcid": "42008" 00:17:34.267 }, 00:17:34.267 "auth": { 00:17:34.267 "state": "completed", 00:17:34.267 "digest": "sha512", 00:17:34.267 "dhgroup": "ffdhe6144" 00:17:34.267 } 00:17:34.267 } 00:17:34.267 ]' 00:17:34.267 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:34.525 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.525 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:34.525 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.525 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:34.525 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.525 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.525 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.784 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.718 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.975 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:17:35.975 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:35.975 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.975 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:35.975 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.975 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:35.975 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.976 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.976 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.976 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.976 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.569 00:17:36.569 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:36.569 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:36.569 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.827 00:33:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:36.827 { 00:17:36.827 "cntlid": 135, 00:17:36.827 "qid": 0, 00:17:36.827 "state": "enabled", 00:17:36.827 "listen_address": { 00:17:36.827 "trtype": "TCP", 00:17:36.827 "adrfam": "IPv4", 00:17:36.827 "traddr": "10.0.0.2", 00:17:36.827 "trsvcid": "4420" 00:17:36.827 }, 00:17:36.827 "peer_address": { 00:17:36.827 "trtype": "TCP", 00:17:36.827 "adrfam": "IPv4", 00:17:36.827 "traddr": "10.0.0.1", 00:17:36.827 "trsvcid": "49256" 00:17:36.827 }, 00:17:36.827 "auth": { 00:17:36.827 "state": "completed", 00:17:36.827 "digest": "sha512", 00:17:36.827 "dhgroup": "ffdhe6144" 00:17:36.827 } 00:17:36.827 } 00:17:36.827 ]' 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.827 00:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.085 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.020 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.277 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:17:38.277 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:38.277 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.277 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:38.277 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.278 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:17:38.278 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.278 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.278 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.278 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:38.278 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:39.208 00:17:39.208 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:39.208 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:39.208 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:39.467 { 00:17:39.467 "cntlid": 137, 00:17:39.467 "qid": 0, 00:17:39.467 "state": "enabled", 00:17:39.467 "listen_address": { 00:17:39.467 "trtype": "TCP", 00:17:39.467 "adrfam": "IPv4", 00:17:39.467 "traddr": "10.0.0.2", 00:17:39.467 "trsvcid": "4420" 00:17:39.467 }, 00:17:39.467 "peer_address": { 00:17:39.467 "trtype": "TCP", 00:17:39.467 "adrfam": "IPv4", 00:17:39.467 "traddr": "10.0.0.1", 00:17:39.467 "trsvcid": "49288" 00:17:39.467 }, 00:17:39.467 "auth": { 00:17:39.467 "state": "completed", 00:17:39.467 "digest": "sha512", 00:17:39.467 "dhgroup": "ffdhe8192" 00:17:39.467 } 00:17:39.467 } 00:17:39.467 ]' 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:39.467 00:33:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.467 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.725 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.099 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:41.099 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:42.033 00:17:42.033 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:42.033 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:42.033 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.290 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.290 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.290 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.290 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.290 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.290 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:42.290 { 00:17:42.290 "cntlid": 139, 00:17:42.290 "qid": 0, 00:17:42.290 "state": "enabled", 00:17:42.290 "listen_address": { 00:17:42.290 "trtype": "TCP", 00:17:42.290 "adrfam": "IPv4", 00:17:42.290 "traddr": "10.0.0.2", 00:17:42.290 "trsvcid": "4420" 00:17:42.290 }, 00:17:42.290 "peer_address": { 00:17:42.290 "trtype": "TCP", 00:17:42.290 "adrfam": "IPv4", 00:17:42.290 "traddr": "10.0.0.1", 00:17:42.290 "trsvcid": "49320" 00:17:42.291 }, 00:17:42.291 "auth": { 00:17:42.291 "state": "completed", 00:17:42.291 "digest": "sha512", 00:17:42.291 "dhgroup": "ffdhe8192" 00:17:42.291 } 00:17:42.291 } 00:17:42.291 ]' 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.291 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.549 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2E3MDg2NTQyZWQwYzk0ZDcwNmM4OWNmYjc0MDFlN2KHjlrk: 00:17:43.482 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:43.482 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.482 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.483 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.483 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.483 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:43.483 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.483 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.740 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:17:43.740 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:43.740 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.740 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:43.740 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.740 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 00:17:43.741 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.741 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.741 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.741 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.741 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.674 00:17:44.674 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:44.675 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:44.675 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.932 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.932 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.932 00:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.932 00:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.932 00:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
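Each iteration's pass/fail decision comes down to three fields of the nvmf_subsystem_get_qpairs output, checked with jq exactly as the [[ ... ]] comparisons above do. A condensed sketch follows; the real script captures the JSON into a variable and compares against whatever digest and DH group the iteration configured (sha512 and ffdhe8192 match the current group of iterations):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # digest negotiated this iteration
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # DH group negotiated this iteration
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished successfully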
00:17:44.932 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:44.932 { 00:17:44.932 "cntlid": 141, 00:17:44.932 "qid": 0, 00:17:44.932 "state": "enabled", 00:17:44.932 "listen_address": { 00:17:44.932 "trtype": "TCP", 00:17:44.932 "adrfam": "IPv4", 00:17:44.932 "traddr": "10.0.0.2", 00:17:44.932 "trsvcid": "4420" 00:17:44.932 }, 00:17:44.932 "peer_address": { 00:17:44.932 "trtype": "TCP", 00:17:44.932 "adrfam": "IPv4", 00:17:44.933 "traddr": "10.0.0.1", 00:17:44.933 "trsvcid": "49344" 00:17:44.933 }, 00:17:44.933 "auth": { 00:17:44.933 "state": "completed", 00:17:44.933 "digest": "sha512", 00:17:44.933 "dhgroup": "ffdhe8192" 00:17:44.933 } 00:17:44.933 } 00:17:44.933 ]' 00:17:44.933 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:44.933 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.933 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:44.933 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.933 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:44.933 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.933 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.933 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.191 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2QxZjcyOWUwODcwNmRiZTZiNmQ2NmVkM2MyMzQwZmZlZmViYjczNGFmYTcwNDhh4PyRaQ==: 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.125 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.382 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.316 00:17:47.316 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:47.316 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:47.316 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:47.573 { 00:17:47.573 "cntlid": 143, 00:17:47.573 "qid": 0, 00:17:47.573 "state": "enabled", 00:17:47.573 "listen_address": { 00:17:47.573 "trtype": "TCP", 00:17:47.573 "adrfam": "IPv4", 00:17:47.573 "traddr": "10.0.0.2", 00:17:47.573 "trsvcid": "4420" 00:17:47.573 }, 00:17:47.573 "peer_address": { 00:17:47.573 "trtype": "TCP", 00:17:47.573 "adrfam": "IPv4", 00:17:47.573 "traddr": "10.0.0.1", 00:17:47.573 "trsvcid": "47208" 00:17:47.573 }, 00:17:47.573 "auth": { 00:17:47.573 "state": "completed", 00:17:47.573 "digest": "sha512", 00:17:47.573 "dhgroup": "ffdhe8192" 00:17:47.573 } 00:17:47.573 } 00:17:47.573 ]' 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.573 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:47.574 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.574 00:33:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.574 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.832 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NDBiMDU1MGE2MTFlMjE2NzRkODVhYThhZWVjMDczOTg5MTdhNDQ3ODVhYWVhYzFkNmE5YjE4YWI2YmY1ZWM4YuezEso=: 00:17:48.765 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.766 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.023 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:49.023 
00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:49.956 00:17:49.956 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:49.956 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:49.956 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:50.215 { 00:17:50.215 "cntlid": 145, 00:17:50.215 "qid": 0, 00:17:50.215 "state": "enabled", 00:17:50.215 "listen_address": { 00:17:50.215 "trtype": "TCP", 00:17:50.215 "adrfam": "IPv4", 00:17:50.215 "traddr": "10.0.0.2", 00:17:50.215 "trsvcid": "4420" 00:17:50.215 }, 00:17:50.215 "peer_address": { 00:17:50.215 "trtype": "TCP", 00:17:50.215 "adrfam": "IPv4", 00:17:50.215 "traddr": "10.0.0.1", 00:17:50.215 "trsvcid": "47250" 00:17:50.215 }, 00:17:50.215 "auth": { 00:17:50.215 "state": "completed", 00:17:50.215 "digest": "sha512", 00:17:50.215 "dhgroup": "ffdhe8192" 00:17:50.215 } 00:17:50.215 } 00:17:50.215 ]' 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.215 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:50.473 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.473 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.473 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.731 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NGE4N2I0MTdjN2I4NGQ3NzY3MzNkYTAwMmFmZWVlN2E1MWY2YWE4YmJjYjA4NzQymkxXig==: 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.686 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:52.621 request: 00:17:52.621 { 00:17:52.621 "name": "nvme0", 00:17:52.621 "trtype": "tcp", 00:17:52.621 "traddr": "10.0.0.2", 00:17:52.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:52.621 "adrfam": "ipv4", 00:17:52.621 "trsvcid": "4420", 00:17:52.621 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.621 "dhchap_key": "key2", 00:17:52.621 "method": "bdev_nvme_attach_controller", 00:17:52.621 "req_id": 1 00:17:52.621 } 00:17:52.621 Got JSON-RPC error response 00:17:52.621 response: 00:17:52.621 { 00:17:52.621 "code": -32602, 00:17:52.621 "message": "Invalid parameters" 00:17:52.621 } 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:52.621 00:33:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 872729 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 872729 ']' 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 872729 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 872729 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 872729' 00:17:52.621 killing process with pid 872729 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 872729 00:17:52.621 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 872729 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.187 rmmod nvme_tcp 00:17:53.187 rmmod nvme_fabrics 00:17:53.187 rmmod nvme_keyring 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 872578 ']' 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 872578 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 872578 ']' 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 872578 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:17:53.187 00:33:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 872578 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 872578' 00:17:53.187 killing process with pid 872578 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 872578 00:17:53.187 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 872578 00:17:53.447 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.447 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.447 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.448 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.448 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.448 00:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.448 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.448 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.355 00:33:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.355 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.D7q /tmp/spdk.key-sha256.McA /tmp/spdk.key-sha384.iLh /tmp/spdk.key-sha512.E99 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:55.355 00:17:55.355 real 2m59.488s 00:17:55.355 user 6m55.655s 00:17:55.355 sys 0m21.636s 00:17:55.355 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:55.355 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.355 ************************************ 00:17:55.355 END TEST nvmf_auth_target 00:17:55.355 ************************************ 00:17:55.355 00:33:21 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:55.355 00:33:21 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:55.355 00:33:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:17:55.355 00:33:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:55.355 00:33:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.614 ************************************ 00:17:55.614 START TEST nvmf_bdevio_no_huge 00:17:55.614 ************************************ 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:55.614 * Looking for test storage... 
00:17:55.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.614 00:33:21 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.614 00:33:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.143 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.143 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.143 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.143 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.143 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.143 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:58.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:58.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:58.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.144 00:33:24 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:58.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:58.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:17:58.144 00:17:58.144 --- 10.0.0.2 ping statistics --- 00:17:58.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.144 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:58.144 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:17:58.144 00:17:58.144 --- 10.0.0.1 ping statistics --- 00:17:58.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.145 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=896823 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 896823 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 896823 ']' 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:58.145 00:33:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.145 [2024-05-15 00:33:24.253096] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:17:58.145 [2024-05-15 00:33:24.253196] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:58.403 [2024-05-15 00:33:24.344142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.403 [2024-05-15 00:33:24.471250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.403 [2024-05-15 00:33:24.471311] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.403 [2024-05-15 00:33:24.471327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.403 [2024-05-15 00:33:24.471340] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.403 [2024-05-15 00:33:24.471352] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.403 [2024-05-15 00:33:24.471438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:58.403 [2024-05-15 00:33:24.471496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:58.403 [2024-05-15 00:33:24.471546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:58.403 [2024-05-15 00:33:24.471549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.335 [2024-05-15 00:33:25.262853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.335 Malloc0 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.335 [2024-05-15 00:33:25.300485] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:59.335 [2024-05-15 00:33:25.300739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:59.335 { 00:17:59.335 "params": { 00:17:59.335 "name": "Nvme$subsystem", 00:17:59.335 "trtype": "$TEST_TRANSPORT", 00:17:59.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.335 "adrfam": "ipv4", 00:17:59.335 "trsvcid": "$NVMF_PORT", 00:17:59.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.335 "hdgst": ${hdgst:-false}, 00:17:59.335 "ddgst": ${ddgst:-false} 00:17:59.335 }, 00:17:59.335 "method": "bdev_nvme_attach_controller" 00:17:59.335 } 00:17:59.335 EOF 00:17:59.335 )") 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:59.335 00:33:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:59.335 "params": { 00:17:59.335 "name": "Nvme1", 00:17:59.335 "trtype": "tcp", 00:17:59.335 "traddr": "10.0.0.2", 00:17:59.335 "adrfam": "ipv4", 00:17:59.335 "trsvcid": "4420", 00:17:59.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.335 "hdgst": false, 00:17:59.335 "ddgst": false 00:17:59.335 }, 00:17:59.335 "method": "bdev_nvme_attach_controller" 00:17:59.335 }' 00:17:59.335 [2024-05-15 00:33:25.342471] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:17:59.335 [2024-05-15 00:33:25.342548] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid896979 ] 00:17:59.335 [2024-05-15 00:33:25.417117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.593 [2024-05-15 00:33:25.533492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.593 [2024-05-15 00:33:25.533545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.593 [2024-05-15 00:33:25.533548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.850 I/O targets: 00:17:59.850 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:59.850 00:17:59.850 00:17:59.850 CUnit - A unit testing framework for C - Version 2.1-3 00:17:59.851 http://cunit.sourceforge.net/ 00:17:59.851 00:17:59.851 00:17:59.851 Suite: bdevio tests on: Nvme1n1 00:17:59.851 Test: blockdev write read block ...passed 00:17:59.851 Test: blockdev write zeroes read block ...passed 00:17:59.851 Test: blockdev write zeroes read no split ...passed 00:17:59.851 Test: blockdev write zeroes read split ...passed 00:18:00.108 Test: blockdev write zeroes read split partial ...passed 00:18:00.108 Test: blockdev reset ...[2024-05-15 00:33:26.042431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:00.108 [2024-05-15 00:33:26.042539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a9340 (9): Bad file descriptor 00:18:00.108 [2024-05-15 00:33:26.095894] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:00.108 passed 00:18:00.108 Test: blockdev write read 8 blocks ...passed 00:18:00.108 Test: blockdev write read size > 128k ...passed 00:18:00.108 Test: blockdev write read invalid size ...passed 00:18:00.108 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.108 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.108 Test: blockdev write read max offset ...passed 00:18:00.108 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.366 Test: blockdev writev readv 8 blocks ...passed 00:18:00.366 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.366 Test: blockdev writev readv block ...passed 00:18:00.366 Test: blockdev writev readv size > 128k ...passed 00:18:00.366 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.366 Test: blockdev comparev and writev ...[2024-05-15 00:33:26.398156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.398195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.398220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.398237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.398611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.398634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.398655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.398671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.399045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.399070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.399091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.399107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.399476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.399499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.399520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.366 [2024-05-15 00:33:26.399536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:00.366 passed 00:18:00.366 Test: blockdev nvme passthru rw ...passed 00:18:00.366 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:33:26.482322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.366 [2024-05-15 00:33:26.482351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.482570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.366 [2024-05-15 00:33:26.482594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.482813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.366 [2024-05-15 00:33:26.482836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:00.366 [2024-05-15 00:33:26.483056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.366 [2024-05-15 00:33:26.483079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:00.366 passed 00:18:00.366 Test: blockdev nvme admin passthru ...passed 00:18:00.624 Test: blockdev copy ...passed 00:18:00.624 00:18:00.624 Run Summary: Type Total Ran Passed Failed Inactive 00:18:00.624 suites 1 1 n/a 0 0 00:18:00.624 tests 23 23 23 0 0 00:18:00.624 asserts 152 152 152 0 n/a 00:18:00.624 00:18:00.624 Elapsed time = 1.432 seconds 00:18:00.881 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.882 rmmod nvme_tcp 00:18:00.882 rmmod nvme_fabrics 00:18:00.882 rmmod nvme_keyring 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 896823 ']' 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 896823 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 896823 ']' 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 896823 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 896823 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 896823' 00:18:00.882 killing process with pid 896823 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 896823 00:18:00.882 [2024-05-15 00:33:26.977548] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:00.882 00:33:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 896823 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.448 00:33:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.351 00:33:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.351 00:18:03.351 real 0m7.925s 00:18:03.351 user 0m14.994s 00:18:03.351 sys 0m2.965s 00:18:03.351 00:33:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:03.351 00:33:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:03.351 ************************************ 00:18:03.351 END TEST nvmf_bdevio_no_huge 00:18:03.351 ************************************ 00:18:03.351 00:33:29 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:03.351 00:33:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:03.351 00:33:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:03.351 00:33:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.351 ************************************ 00:18:03.351 START TEST nvmf_tls 00:18:03.351 ************************************ 00:18:03.351 00:33:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:03.609 * Looking for test 
storage... 00:18:03.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.609 00:33:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:06.138 
00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:06.138 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:06.138 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:06.138 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:06.138 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:06.138 00:33:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:06.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:18:06.138 00:18:06.138 --- 10.0.0.2 ping statistics --- 00:18:06.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.138 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:06.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:18:06.138 00:18:06.138 --- 10.0.0.1 ping statistics --- 00:18:06.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.138 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.138 00:33:32 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=899463 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 899463 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 899463 ']' 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:06.139 00:33:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.139 [2024-05-15 00:33:32.122804] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:18:06.139 [2024-05-15 00:33:32.122881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.139 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.139 [2024-05-15 00:33:32.198397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.397 [2024-05-15 00:33:32.313481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.397 [2024-05-15 00:33:32.313541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:06.397 [2024-05-15 00:33:32.313557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.397 [2024-05-15 00:33:32.313571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.397 [2024-05-15 00:33:32.313582] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.397 [2024-05-15 00:33:32.313615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:06.963 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:07.240 true 00:18:07.240 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.240 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:07.512 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:07.512 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:07.512 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:07.770 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.770 00:33:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:08.028 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:08.028 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:08.028 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:08.287 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:08.287 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:08.545 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:08.545 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:08.545 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:08.545 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:08.804 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:08.804 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:08.804 00:33:34 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:09.061 00:33:35 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:09.061 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:09.319 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:09.319 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:09.319 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:09.577 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:09.577 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.82GR4ukP44 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.skwLHz1jhN 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.82GR4ukP44 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.skwLHz1jhN 00:18:09.836 00:33:35 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:10.094 00:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:10.661 00:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.82GR4ukP44 00:18:10.661 00:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.82GR4ukP44 00:18:10.661 00:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:10.661 [2024-05-15 00:33:36.822957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.920 00:33:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:10.920 00:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:11.178 [2024-05-15 00:33:37.332305] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:11.178 [2024-05-15 00:33:37.332396] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.178 [2024-05-15 00:33:37.332624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.437 00:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:11.437 malloc0 00:18:11.695 00:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.695 00:33:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.82GR4ukP44 00:18:11.954 [2024-05-15 00:33:38.082686] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:11.954 00:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.82GR4ukP44 00:18:12.212 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.193 Initializing NVMe Controllers 00:18:22.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:22.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:22.193 Initialization complete. Launching workers. 
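Note: the target-side bring-up traced above reduces to the rpc.py sequence sketched below. This is a condensed, readable sketch rather than the verbatim tls.sh; the rpc.py path is abbreviated to $SPDK/scripts/rpc.py and the key file name is simply the one mktemp returned in this run. The spdk_nvme_perf numbers that follow were produced against exactly this configuration.

  # sketch: stand up a TLS-enabled NVMe/TCP target (values taken from this run)
  rpc=$SPDK/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl                          # use the ssl socket implementation
  $rpc sock_impl_set_options -i ssl --tls-version 13         # TLS 1.3
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.82GR4ukP44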
00:18:22.193 ======================================================== 00:18:22.193 Latency(us) 00:18:22.193 Device Information : IOPS MiB/s Average min max 00:18:22.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7743.47 30.25 8267.76 1280.33 9210.33 00:18:22.193 ======================================================== 00:18:22.193 Total : 7743.47 30.25 8267.76 1280.33 9210.33 00:18:22.193 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.82GR4ukP44 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.82GR4ukP44' 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=901366 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 901366 /var/tmp/bdevperf.sock 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 901366 ']' 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:22.193 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.193 [2024-05-15 00:33:48.245542] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:22.193 [2024-05-15 00:33:48.245626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901366 ] 00:18:22.193 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.193 [2024-05-15 00:33:48.312260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.451 [2024-05-15 00:33:48.418894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.451 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:22.451 00:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:22.451 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.82GR4ukP44 00:18:22.709 [2024-05-15 00:33:48.744796] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.709 [2024-05-15 00:33:48.744902] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:22.709 TLSTESTn1 00:18:22.709 00:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:22.967 Running I/O for 10 seconds... 00:18:32.928 00:18:32.928 Latency(us) 00:18:32.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.928 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:32.928 Verification LBA range: start 0x0 length 0x2000 00:18:32.928 TLSTESTn1 : 10.08 1337.20 5.22 0.00 0.00 95395.10 8835.22 134373.07 00:18:32.929 =================================================================================================================== 00:18:32.929 Total : 1337.20 5.22 0.00 0.00 95395.10 8835.22 134373.07 00:18:32.929 0 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 901366 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 901366 ']' 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 901366 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 901366 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 901366' 00:18:32.929 killing process with pid 901366 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 901366 00:18:32.929 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.929 00:18:32.929 Latency(us) 00:18:32.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.929 
=================================================================================================================== 00:18:32.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.929 [2024-05-15 00:33:59.073414] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:32.929 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 901366 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.skwLHz1jhN 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.skwLHz1jhN 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.skwLHz1jhN 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.skwLHz1jhN' 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=902681 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.235 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.236 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 902681 /var/tmp/bdevperf.sock 00:18:33.236 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 902681 ']' 00:18:33.236 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.236 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:33.236 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.236 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:33.236 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.236 [2024-05-15 00:33:59.385810] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:33.236 [2024-05-15 00:33:59.385902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902681 ] 00:18:33.494 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.494 [2024-05-15 00:33:59.464484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.494 [2024-05-15 00:33:59.575282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.776 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:33.776 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:33.776 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.skwLHz1jhN 00:18:33.776 [2024-05-15 00:33:59.920849] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.776 [2024-05-15 00:33:59.920978] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:34.033 [2024-05-15 00:33:59.926502] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:34.034 [2024-05-15 00:33:59.926841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70d130 (107): Transport endpoint is not connected 00:18:34.034 [2024-05-15 00:33:59.927828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70d130 (9): Bad file descriptor 00:18:34.034 [2024-05-15 00:33:59.928827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.034 [2024-05-15 00:33:59.928848] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:34.034 [2024-05-15 00:33:59.928880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:34.034 request: 00:18:34.034 { 00:18:34.034 "name": "TLSTEST", 00:18:34.034 "trtype": "tcp", 00:18:34.034 "traddr": "10.0.0.2", 00:18:34.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.034 "adrfam": "ipv4", 00:18:34.034 "trsvcid": "4420", 00:18:34.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.034 "psk": "/tmp/tmp.skwLHz1jhN", 00:18:34.034 "method": "bdev_nvme_attach_controller", 00:18:34.034 "req_id": 1 00:18:34.034 } 00:18:34.034 Got JSON-RPC error response 00:18:34.034 response: 00:18:34.034 { 00:18:34.034 "code": -32602, 00:18:34.034 "message": "Invalid parameters" 00:18:34.034 } 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 902681 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 902681 ']' 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 902681 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 902681 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 902681' 00:18:34.034 killing process with pid 902681 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 902681 00:18:34.034 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.034 00:18:34.034 Latency(us) 00:18:34.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.034 =================================================================================================================== 00:18:34.034 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:34.034 [2024-05-15 00:33:59.980536] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:34.034 00:33:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 902681 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.82GR4ukP44 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.82GR4ukP44 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.82GR4ukP44 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.82GR4ukP44' 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=902770 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 902770 /var/tmp/bdevperf.sock 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 902770 ']' 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:34.292 00:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.292 [2024-05-15 00:34:00.285852] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:34.292 [2024-05-15 00:34:00.285966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902770 ] 00:18:34.292 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.292 [2024-05-15 00:34:00.362801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.550 [2024-05-15 00:34:00.476185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.82GR4ukP44 00:18:35.483 [2024-05-15 00:34:01.560843] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.483 [2024-05-15 00:34:01.560983] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:35.483 [2024-05-15 00:34:01.570658] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:35.483 [2024-05-15 00:34:01.570700] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:35.483 [2024-05-15 00:34:01.570756] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:35.483 [2024-05-15 00:34:01.570924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebe130 (107): Transport endpoint is not connected 00:18:35.483 [2024-05-15 00:34:01.571887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebe130 (9): Bad file descriptor 00:18:35.483 [2024-05-15 00:34:01.572887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.483 [2024-05-15 00:34:01.572907] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:35.483 [2024-05-15 00:34:01.572946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
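Note: this failure is expected. The target only registered nqn.2016-06.io.spdk:host1 against the PSK, so an attach attempt as host2 finds no key for the TLS identity ("Could not find PSK for identity" above) and the controller ends up in a failed state. For reference, a condensed sketch of the initiator-side flow being exercised here (bdevperf options, RPC socket and key file taken from this run, rpc.py path abbreviated); the JSON-RPC request and error response recorded just below are that failing attach:

  # sketch: initiator side, driven over a private bdevperf RPC socket
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.82GR4ukP44
  # a hostnqn the target never added via nvmf_subsystem_add_host cannot complete the TLS handshake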
00:18:35.483 request: 00:18:35.483 { 00:18:35.483 "name": "TLSTEST", 00:18:35.483 "trtype": "tcp", 00:18:35.483 "traddr": "10.0.0.2", 00:18:35.483 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:35.483 "adrfam": "ipv4", 00:18:35.483 "trsvcid": "4420", 00:18:35.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.483 "psk": "/tmp/tmp.82GR4ukP44", 00:18:35.483 "method": "bdev_nvme_attach_controller", 00:18:35.483 "req_id": 1 00:18:35.483 } 00:18:35.483 Got JSON-RPC error response 00:18:35.483 response: 00:18:35.483 { 00:18:35.483 "code": -32602, 00:18:35.483 "message": "Invalid parameters" 00:18:35.483 } 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 902770 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 902770 ']' 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 902770 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 902770 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 902770' 00:18:35.483 killing process with pid 902770 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 902770 00:18:35.483 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.483 00:18:35.483 Latency(us) 00:18:35.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.483 =================================================================================================================== 00:18:35.483 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.483 [2024-05-15 00:34:01.625338] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:35.483 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 902770 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.82GR4ukP44 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.82GR4ukP44 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.82GR4ukP44 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.82GR4ukP44' 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=902962 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 902962 /var/tmp/bdevperf.sock 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 902962 ']' 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:35.741 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.742 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:35.742 00:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.000 [2024-05-15 00:34:01.932378] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:36.000 [2024-05-15 00:34:01.932455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902962 ] 00:18:36.000 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.000 [2024-05-15 00:34:02.000519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.000 [2024-05-15 00:34:02.104576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.258 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:36.258 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:36.258 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.82GR4ukP44 00:18:36.516 [2024-05-15 00:34:02.455675] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.516 [2024-05-15 00:34:02.455795] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:36.516 [2024-05-15 00:34:02.462900] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:36.516 [2024-05-15 00:34:02.462956] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:36.516 [2024-05-15 00:34:02.463012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:36.516 [2024-05-15 00:34:02.463867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbda130 (107): Transport endpoint is not connected 00:18:36.516 [2024-05-15 00:34:02.464857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbda130 (9): Bad file descriptor 00:18:36.516 [2024-05-15 00:34:02.465857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:36.516 [2024-05-15 00:34:02.465878] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:36.516 [2024-05-15 00:34:02.465910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:36.516 request: 00:18:36.516 { 00:18:36.516 "name": "TLSTEST", 00:18:36.516 "trtype": "tcp", 00:18:36.516 "traddr": "10.0.0.2", 00:18:36.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.516 "adrfam": "ipv4", 00:18:36.516 "trsvcid": "4420", 00:18:36.516 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:36.516 "psk": "/tmp/tmp.82GR4ukP44", 00:18:36.516 "method": "bdev_nvme_attach_controller", 00:18:36.516 "req_id": 1 00:18:36.516 } 00:18:36.516 Got JSON-RPC error response 00:18:36.516 response: 00:18:36.516 { 00:18:36.516 "code": -32602, 00:18:36.516 "message": "Invalid parameters" 00:18:36.516 } 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 902962 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 902962 ']' 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 902962 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 902962 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 902962' 00:18:36.516 killing process with pid 902962 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 902962 00:18:36.516 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.516 00:18:36.516 Latency(us) 00:18:36.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.516 =================================================================================================================== 00:18:36.516 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.516 [2024-05-15 00:34:02.506685] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:36.516 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 902962 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=903104 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 903104 /var/tmp/bdevperf.sock 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 903104 ']' 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:36.774 00:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.774 [2024-05-15 00:34:02.784044] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:36.774 [2024-05-15 00:34:02.784127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903104 ] 00:18:36.774 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.774 [2024-05-15 00:34:02.854821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.032 [2024-05-15 00:34:02.969724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.032 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:37.032 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:37.032 00:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:37.289 [2024-05-15 00:34:03.325241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:37.289 [2024-05-15 00:34:03.326897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285ab0 (9): Bad file descriptor 00:18:37.289 [2024-05-15 00:34:03.327892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:37.289 [2024-05-15 00:34:03.327926] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:37.290 [2024-05-15 00:34:03.327949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:37.290 request: 00:18:37.290 { 00:18:37.290 "name": "TLSTEST", 00:18:37.290 "trtype": "tcp", 00:18:37.290 "traddr": "10.0.0.2", 00:18:37.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.290 "adrfam": "ipv4", 00:18:37.290 "trsvcid": "4420", 00:18:37.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.290 "method": "bdev_nvme_attach_controller", 00:18:37.290 "req_id": 1 00:18:37.290 } 00:18:37.290 Got JSON-RPC error response 00:18:37.290 response: 00:18:37.290 { 00:18:37.290 "code": -32602, 00:18:37.290 "message": "Invalid parameters" 00:18:37.290 } 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 903104 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 903104 ']' 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 903104 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 903104 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 903104' 00:18:37.290 killing process with pid 903104 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 903104 00:18:37.290 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.290 00:18:37.290 Latency(us) 00:18:37.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.290 =================================================================================================================== 00:18:37.290 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.290 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 903104 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 899463 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 899463 ']' 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 899463 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 899463 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 899463' 00:18:37.547 killing process with pid 899463 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 899463 00:18:37.547 
[2024-05-15 00:34:03.665083] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:37.547 [2024-05-15 00:34:03.665137] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:37.547 00:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 899463 00:18:37.804 00:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:37.804 00:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:37.804 00:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.804 00:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:37.804 00:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:37.804 00:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:37.804 00:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.N6KAhddIiN 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.N6KAhddIiN 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=903261 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 903261 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 903261 ']' 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:38.062 00:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.062 [2024-05-15 00:34:04.068544] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:38.062 [2024-05-15 00:34:04.068624] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.062 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.062 [2024-05-15 00:34:04.141706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.320 [2024-05-15 00:34:04.247441] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.320 [2024-05-15 00:34:04.247503] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.320 [2024-05-15 00:34:04.247531] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.320 [2024-05-15 00:34:04.247543] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.320 [2024-05-15 00:34:04.247553] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.320 [2024-05-15 00:34:04.247579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.N6KAhddIiN 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N6KAhddIiN 00:18:39.255 00:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:39.256 [2024-05-15 00:34:05.349511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.256 00:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:39.513 00:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:39.771 [2024-05-15 00:34:05.850814] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:39.771 [2024-05-15 00:34:05.850972] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.772 [2024-05-15 00:34:05.851173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.772 00:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:40.029 malloc0 00:18:40.030 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
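For reference while following the trace, the target-side TLS bring-up above condenses to the sketch below. This is a reconstruction from the commands traced in this run, not captured test output: the PSK path (/tmp/tmp.N6KAhddIiN), NQNs and addresses are simply the values this run happens to use, and the interchange-key layout (base64 of the key bytes plus a little-endian CRC-32, per the TP-8011 PSK interchange format) is an assumption that should be checked against format_interchange_psk in nvmf/common.sh.

# Sketch, assuming the SPDK tree layout used in this workspace; not part of the captured log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY_FILE=/tmp/tmp.N6KAhddIiN
KEY_HEX=00112233445566778899aabbccddeeff0011223344556677

# Build the NVMeTLSkey-1 interchange string (digest id 02) and store it with strict permissions.
# CRC-32 framing and endianness are assumptions; verify against format_key in nvmf/common.sh.
KEY_LONG=$(python3 -c 'import sys, base64, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, "little")
print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")' "$KEY_HEX")
echo -n "$KEY_LONG" > "$KEY_FILE"
chmod 0600 "$KEY_FILE"   # 0666 is rejected later in this log with "Incorrect permissions for PSK file"

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The remaining step, binding the PSK to host1, is the nvmf_subsystem_add_host --psk call traced just below.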
00:18:40.287 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN 00:18:40.546 [2024-05-15 00:34:06.692799] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N6KAhddIiN 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N6KAhddIiN' 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=903553 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 903553 /var/tmp/bdevperf.sock 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 903553 ']' 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:40.805 00:34:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.805 [2024-05-15 00:34:06.758789] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:40.805 [2024-05-15 00:34:06.758862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903553 ] 00:18:40.805 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.805 [2024-05-15 00:34:06.825158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.805 [2024-05-15 00:34:06.929457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.063 00:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:41.063 00:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:41.063 00:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN 00:18:41.321 [2024-05-15 00:34:07.264367] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.321 [2024-05-15 00:34:07.264480] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:41.321 TLSTESTn1 00:18:41.321 00:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.321 Running I/O for 10 seconds... 00:18:53.515 00:18:53.515 Latency(us) 00:18:53.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.515 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:53.515 Verification LBA range: start 0x0 length 0x2000 00:18:53.515 TLSTESTn1 : 10.03 934.66 3.65 0.00 0.00 136659.74 8592.50 127382.57 00:18:53.515 =================================================================================================================== 00:18:53.515 Total : 934.66 3.65 0.00 0.00 136659.74 8592.50 127382.57 00:18:53.515 0 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 903553 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 903553 ']' 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 903553 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 903553 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 903553' 00:18:53.515 killing process with pid 903553 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 903553 00:18:53.515 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.515 00:18:53.515 Latency(us) 00:18:53.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.515 
=================================================================================================================== 00:18:53.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.515 [2024-05-15 00:34:17.544792] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 903553 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.N6KAhddIiN 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N6KAhddIiN 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N6KAhddIiN 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N6KAhddIiN 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N6KAhddIiN' 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=904867 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 904867 /var/tmp/bdevperf.sock 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 904867 ']' 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:53.515 00:34:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.515 [2024-05-15 00:34:17.829001] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:18:53.515 [2024-05-15 00:34:17.829083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904867 ] 00:18:53.515 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.515 [2024-05-15 00:34:17.897709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.515 [2024-05-15 00:34:18.005545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN 00:18:53.515 [2024-05-15 00:34:18.328753] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.515 [2024-05-15 00:34:18.328840] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:53.515 [2024-05-15 00:34:18.328854] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.N6KAhddIiN 00:18:53.515 request: 00:18:53.515 { 00:18:53.515 "name": "TLSTEST", 00:18:53.515 "trtype": "tcp", 00:18:53.515 "traddr": "10.0.0.2", 00:18:53.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.515 "adrfam": "ipv4", 00:18:53.515 "trsvcid": "4420", 00:18:53.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.515 "psk": "/tmp/tmp.N6KAhddIiN", 00:18:53.515 "method": "bdev_nvme_attach_controller", 00:18:53.515 "req_id": 1 00:18:53.515 } 00:18:53.515 Got JSON-RPC error response 00:18:53.515 response: 00:18:53.515 { 00:18:53.515 "code": -1, 00:18:53.515 "message": "Operation not permitted" 00:18:53.515 } 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 904867 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 904867 ']' 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 904867 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 904867 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 904867' 00:18:53.515 killing process with pid 904867 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 904867 00:18:53.515 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.515 00:18:53.515 Latency(us) 00:18:53.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.515 =================================================================================================================== 00:18:53.515 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # 
wait 904867 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 903261 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 903261 ']' 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 903261 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 903261 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:53.515 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 903261' 00:18:53.516 killing process with pid 903261 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 903261 00:18:53.516 [2024-05-15 00:34:18.660738] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:53.516 [2024-05-15 00:34:18.660812] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 903261 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=905009 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 905009 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 905009 ']' 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:53.516 00:34:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 [2024-05-15 00:34:19.009825] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
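The initiator-side counterpart, as exercised above with bdevperf, condenses to the sketch below. Again this is a reconstruction from the traced commands rather than captured output; the socket path, queue parameters and NQNs are the ones this run uses, and the key file must stay at mode 0600 or bdev_nvme_load_psk rejects it exactly as in the 0666 attempt above.

# Sketch of the host side as driven by run_bdevperf in tls.sh (values taken from this run).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BDEVPERF_SOCK=/var/tmp/bdevperf.sock
KEY_FILE=/tmp/tmp.N6KAhddIiN      # must be chmod 0600; 0666 produced "Operation not permitted" above

# Start bdevperf idle (-z) with its own RPC socket; the test waits for that socket before issuing RPCs.
$SPDK/build/examples/bdevperf -m 0x4 -z -r "$BDEVPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# Attach a TLS-protected controller through bdevperf's RPC socket using the PSK file.
$SPDK/scripts/rpc.py -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$KEY_FILE"

# Drive the 10-second verify workload over the attached TLSTESTn1 bdev, then shut bdevperf down.
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$BDEVPERF_SOCK" perform_tests
kill "$bdevperf_pid"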
00:18:53.516 [2024-05-15 00:34:19.009911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.516 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.516 [2024-05-15 00:34:19.098150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.516 [2024-05-15 00:34:19.221760] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.516 [2024-05-15 00:34:19.221815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.516 [2024-05-15 00:34:19.221831] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.516 [2024-05-15 00:34:19.221845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.516 [2024-05-15 00:34:19.221857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.516 [2024-05-15 00:34:19.221894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.N6KAhddIiN 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.N6KAhddIiN 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.N6KAhddIiN 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N6KAhddIiN 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.516 [2024-05-15 00:34:19.647689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.516 00:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:54.087 00:34:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:54.087 [2024-05-15 00:34:20.233210] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:54.087 [2024-05-15 00:34:20.233334] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.087 [2024-05-15 00:34:20.233555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.344 00:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:54.602 malloc0 00:18:54.602 00:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.867 00:34:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN 00:18:54.867 [2024-05-15 00:34:21.007119] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:54.867 [2024-05-15 00:34:21.007168] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:54.867 [2024-05-15 00:34:21.007207] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:54.867 request: 00:18:54.867 { 00:18:54.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.867 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.867 "psk": "/tmp/tmp.N6KAhddIiN", 00:18:54.867 "method": "nvmf_subsystem_add_host", 00:18:54.867 "req_id": 1 00:18:54.867 } 00:18:54.867 Got JSON-RPC error response 00:18:54.867 response: 00:18:54.867 { 00:18:54.867 "code": -32603, 00:18:54.867 "message": "Internal error" 00:18:54.867 } 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 905009 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 905009 ']' 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 905009 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 905009 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 905009' 00:18:55.161 killing process with pid 905009 00:18:55.161 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 905009 00:18:55.161 [2024-05-15 00:34:21.061462] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:55.162 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 905009 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- 
target/tls.sh@181 -- # chmod 0600 /tmp/tmp.N6KAhddIiN 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=905315 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 905315 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 905315 ']' 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:55.426 00:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.426 [2024-05-15 00:34:21.416577] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:18:55.426 [2024-05-15 00:34:21.416660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.426 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.426 [2024-05-15 00:34:21.494812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.684 [2024-05-15 00:34:21.612735] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.684 [2024-05-15 00:34:21.612808] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.684 [2024-05-15 00:34:21.612824] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.684 [2024-05-15 00:34:21.612838] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.684 [2024-05-15 00:34:21.612849] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.684 [2024-05-15 00:34:21.612882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.250 00:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:56.250 00:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:56.250 00:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.250 00:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:56.250 00:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.508 00:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.508 00:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.N6KAhddIiN 00:18:56.508 00:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N6KAhddIiN 00:18:56.508 00:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:56.765 [2024-05-15 00:34:22.688147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.765 00:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:57.023 00:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:57.282 [2024-05-15 00:34:23.189465] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:57.282 [2024-05-15 00:34:23.189579] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.282 [2024-05-15 00:34:23.189793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.282 00:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:57.538 malloc0 00:18:57.538 00:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:57.795 00:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN 00:18:58.053 [2024-05-15 00:34:24.035647] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=905609 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 905609 /var/tmp/bdevperf.sock 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 905609 ']' 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:58.053 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.053 [2024-05-15 00:34:24.099586] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:18:58.053 [2024-05-15 00:34:24.099663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905609 ] 00:18:58.053 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.053 [2024-05-15 00:34:24.170713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.311 [2024-05-15 00:34:24.278678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.311 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:58.311 00:34:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:18:58.311 00:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN 00:18:58.569 [2024-05-15 00:34:24.661066] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.569 [2024-05-15 00:34:24.661171] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:58.827 TLSTESTn1 00:18:58.827 00:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:59.085 00:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:59.085 "subsystems": [ 00:18:59.085 { 00:18:59.085 "subsystem": "keyring", 00:18:59.085 "config": [] 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "subsystem": "iobuf", 00:18:59.085 "config": [ 00:18:59.085 { 00:18:59.085 "method": "iobuf_set_options", 00:18:59.085 "params": { 00:18:59.085 "small_pool_count": 8192, 00:18:59.085 "large_pool_count": 1024, 00:18:59.085 "small_bufsize": 8192, 00:18:59.085 "large_bufsize": 135168 00:18:59.085 } 00:18:59.085 } 00:18:59.085 ] 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "subsystem": "sock", 00:18:59.085 "config": [ 00:18:59.085 { 00:18:59.085 "method": "sock_impl_set_options", 00:18:59.085 "params": { 00:18:59.085 "impl_name": "posix", 00:18:59.085 "recv_buf_size": 2097152, 00:18:59.085 "send_buf_size": 2097152, 00:18:59.085 "enable_recv_pipe": true, 00:18:59.085 "enable_quickack": false, 00:18:59.085 "enable_placement_id": 0, 00:18:59.085 "enable_zerocopy_send_server": true, 00:18:59.085 "enable_zerocopy_send_client": false, 00:18:59.085 "zerocopy_threshold": 0, 00:18:59.085 "tls_version": 0, 00:18:59.085 "enable_ktls": false 00:18:59.085 } 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "method": "sock_impl_set_options", 00:18:59.085 "params": { 00:18:59.085 
"impl_name": "ssl", 00:18:59.085 "recv_buf_size": 4096, 00:18:59.085 "send_buf_size": 4096, 00:18:59.085 "enable_recv_pipe": true, 00:18:59.085 "enable_quickack": false, 00:18:59.085 "enable_placement_id": 0, 00:18:59.085 "enable_zerocopy_send_server": true, 00:18:59.085 "enable_zerocopy_send_client": false, 00:18:59.085 "zerocopy_threshold": 0, 00:18:59.085 "tls_version": 0, 00:18:59.085 "enable_ktls": false 00:18:59.085 } 00:18:59.085 } 00:18:59.085 ] 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "subsystem": "vmd", 00:18:59.085 "config": [] 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "subsystem": "accel", 00:18:59.085 "config": [ 00:18:59.085 { 00:18:59.085 "method": "accel_set_options", 00:18:59.085 "params": { 00:18:59.085 "small_cache_size": 128, 00:18:59.085 "large_cache_size": 16, 00:18:59.085 "task_count": 2048, 00:18:59.085 "sequence_count": 2048, 00:18:59.085 "buf_count": 2048 00:18:59.085 } 00:18:59.085 } 00:18:59.085 ] 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "subsystem": "bdev", 00:18:59.085 "config": [ 00:18:59.085 { 00:18:59.085 "method": "bdev_set_options", 00:18:59.085 "params": { 00:18:59.085 "bdev_io_pool_size": 65535, 00:18:59.085 "bdev_io_cache_size": 256, 00:18:59.085 "bdev_auto_examine": true, 00:18:59.085 "iobuf_small_cache_size": 128, 00:18:59.085 "iobuf_large_cache_size": 16 00:18:59.085 } 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "method": "bdev_raid_set_options", 00:18:59.085 "params": { 00:18:59.085 "process_window_size_kb": 1024 00:18:59.085 } 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "method": "bdev_iscsi_set_options", 00:18:59.085 "params": { 00:18:59.085 "timeout_sec": 30 00:18:59.085 } 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "method": "bdev_nvme_set_options", 00:18:59.085 "params": { 00:18:59.085 "action_on_timeout": "none", 00:18:59.085 "timeout_us": 0, 00:18:59.085 "timeout_admin_us": 0, 00:18:59.085 "keep_alive_timeout_ms": 10000, 00:18:59.085 "arbitration_burst": 0, 00:18:59.085 "low_priority_weight": 0, 00:18:59.085 "medium_priority_weight": 0, 00:18:59.085 "high_priority_weight": 0, 00:18:59.085 "nvme_adminq_poll_period_us": 10000, 00:18:59.085 "nvme_ioq_poll_period_us": 0, 00:18:59.085 "io_queue_requests": 0, 00:18:59.085 "delay_cmd_submit": true, 00:18:59.085 "transport_retry_count": 4, 00:18:59.085 "bdev_retry_count": 3, 00:18:59.085 "transport_ack_timeout": 0, 00:18:59.085 "ctrlr_loss_timeout_sec": 0, 00:18:59.085 "reconnect_delay_sec": 0, 00:18:59.085 "fast_io_fail_timeout_sec": 0, 00:18:59.085 "disable_auto_failback": false, 00:18:59.085 "generate_uuids": false, 00:18:59.085 "transport_tos": 0, 00:18:59.085 "nvme_error_stat": false, 00:18:59.085 "rdma_srq_size": 0, 00:18:59.085 "io_path_stat": false, 00:18:59.085 "allow_accel_sequence": false, 00:18:59.085 "rdma_max_cq_size": 0, 00:18:59.085 "rdma_cm_event_timeout_ms": 0, 00:18:59.085 "dhchap_digests": [ 00:18:59.085 "sha256", 00:18:59.085 "sha384", 00:18:59.085 "sha512" 00:18:59.085 ], 00:18:59.085 "dhchap_dhgroups": [ 00:18:59.085 "null", 00:18:59.085 "ffdhe2048", 00:18:59.085 "ffdhe3072", 00:18:59.085 "ffdhe4096", 00:18:59.085 "ffdhe6144", 00:18:59.085 "ffdhe8192" 00:18:59.085 ] 00:18:59.085 } 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "method": "bdev_nvme_set_hotplug", 00:18:59.085 "params": { 00:18:59.085 "period_us": 100000, 00:18:59.085 "enable": false 00:18:59.085 } 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "method": "bdev_malloc_create", 00:18:59.085 "params": { 00:18:59.085 "name": "malloc0", 00:18:59.085 "num_blocks": 8192, 00:18:59.085 "block_size": 4096, 00:18:59.085 
"physical_block_size": 4096, 00:18:59.085 "uuid": "19a31635-be4c-4f21-9e5e-128e559d161c", 00:18:59.085 "optimal_io_boundary": 0 00:18:59.085 } 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "method": "bdev_wait_for_examine" 00:18:59.085 } 00:18:59.085 ] 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "subsystem": "nbd", 00:18:59.085 "config": [] 00:18:59.085 }, 00:18:59.085 { 00:18:59.085 "subsystem": "scheduler", 00:18:59.085 "config": [ 00:18:59.085 { 00:18:59.086 "method": "framework_set_scheduler", 00:18:59.086 "params": { 00:18:59.086 "name": "static" 00:18:59.086 } 00:18:59.086 } 00:18:59.086 ] 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "subsystem": "nvmf", 00:18:59.086 "config": [ 00:18:59.086 { 00:18:59.086 "method": "nvmf_set_config", 00:18:59.086 "params": { 00:18:59.086 "discovery_filter": "match_any", 00:18:59.086 "admin_cmd_passthru": { 00:18:59.086 "identify_ctrlr": false 00:18:59.086 } 00:18:59.086 } 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "method": "nvmf_set_max_subsystems", 00:18:59.086 "params": { 00:18:59.086 "max_subsystems": 1024 00:18:59.086 } 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "method": "nvmf_set_crdt", 00:18:59.086 "params": { 00:18:59.086 "crdt1": 0, 00:18:59.086 "crdt2": 0, 00:18:59.086 "crdt3": 0 00:18:59.086 } 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "method": "nvmf_create_transport", 00:18:59.086 "params": { 00:18:59.086 "trtype": "TCP", 00:18:59.086 "max_queue_depth": 128, 00:18:59.086 "max_io_qpairs_per_ctrlr": 127, 00:18:59.086 "in_capsule_data_size": 4096, 00:18:59.086 "max_io_size": 131072, 00:18:59.086 "io_unit_size": 131072, 00:18:59.086 "max_aq_depth": 128, 00:18:59.086 "num_shared_buffers": 511, 00:18:59.086 "buf_cache_size": 4294967295, 00:18:59.086 "dif_insert_or_strip": false, 00:18:59.086 "zcopy": false, 00:18:59.086 "c2h_success": false, 00:18:59.086 "sock_priority": 0, 00:18:59.086 "abort_timeout_sec": 1, 00:18:59.086 "ack_timeout": 0, 00:18:59.086 "data_wr_pool_size": 0 00:18:59.086 } 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "method": "nvmf_create_subsystem", 00:18:59.086 "params": { 00:18:59.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.086 "allow_any_host": false, 00:18:59.086 "serial_number": "SPDK00000000000001", 00:18:59.086 "model_number": "SPDK bdev Controller", 00:18:59.086 "max_namespaces": 10, 00:18:59.086 "min_cntlid": 1, 00:18:59.086 "max_cntlid": 65519, 00:18:59.086 "ana_reporting": false 00:18:59.086 } 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "method": "nvmf_subsystem_add_host", 00:18:59.086 "params": { 00:18:59.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.086 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.086 "psk": "/tmp/tmp.N6KAhddIiN" 00:18:59.086 } 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "method": "nvmf_subsystem_add_ns", 00:18:59.086 "params": { 00:18:59.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.086 "namespace": { 00:18:59.086 "nsid": 1, 00:18:59.086 "bdev_name": "malloc0", 00:18:59.086 "nguid": "19A31635BE4C4F219E5E128E559D161C", 00:18:59.086 "uuid": "19a31635-be4c-4f21-9e5e-128e559d161c", 00:18:59.086 "no_auto_visible": false 00:18:59.086 } 00:18:59.086 } 00:18:59.086 }, 00:18:59.086 { 00:18:59.086 "method": "nvmf_subsystem_add_listener", 00:18:59.086 "params": { 00:18:59.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.086 "listen_address": { 00:18:59.086 "trtype": "TCP", 00:18:59.086 "adrfam": "IPv4", 00:18:59.086 "traddr": "10.0.0.2", 00:18:59.086 "trsvcid": "4420" 00:18:59.086 }, 00:18:59.086 "secure_channel": true 00:18:59.086 } 00:18:59.086 } 00:18:59.086 ] 00:18:59.086 } 
00:18:59.086 ] 00:18:59.086 }' 00:18:59.086 00:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:59.344 00:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:59.344 "subsystems": [ 00:18:59.344 { 00:18:59.344 "subsystem": "keyring", 00:18:59.344 "config": [] 00:18:59.344 }, 00:18:59.344 { 00:18:59.344 "subsystem": "iobuf", 00:18:59.344 "config": [ 00:18:59.344 { 00:18:59.344 "method": "iobuf_set_options", 00:18:59.344 "params": { 00:18:59.344 "small_pool_count": 8192, 00:18:59.344 "large_pool_count": 1024, 00:18:59.344 "small_bufsize": 8192, 00:18:59.344 "large_bufsize": 135168 00:18:59.344 } 00:18:59.344 } 00:18:59.344 ] 00:18:59.344 }, 00:18:59.344 { 00:18:59.344 "subsystem": "sock", 00:18:59.344 "config": [ 00:18:59.344 { 00:18:59.344 "method": "sock_impl_set_options", 00:18:59.344 "params": { 00:18:59.344 "impl_name": "posix", 00:18:59.344 "recv_buf_size": 2097152, 00:18:59.344 "send_buf_size": 2097152, 00:18:59.344 "enable_recv_pipe": true, 00:18:59.344 "enable_quickack": false, 00:18:59.344 "enable_placement_id": 0, 00:18:59.344 "enable_zerocopy_send_server": true, 00:18:59.344 "enable_zerocopy_send_client": false, 00:18:59.344 "zerocopy_threshold": 0, 00:18:59.344 "tls_version": 0, 00:18:59.344 "enable_ktls": false 00:18:59.344 } 00:18:59.344 }, 00:18:59.344 { 00:18:59.344 "method": "sock_impl_set_options", 00:18:59.344 "params": { 00:18:59.344 "impl_name": "ssl", 00:18:59.344 "recv_buf_size": 4096, 00:18:59.344 "send_buf_size": 4096, 00:18:59.344 "enable_recv_pipe": true, 00:18:59.344 "enable_quickack": false, 00:18:59.344 "enable_placement_id": 0, 00:18:59.344 "enable_zerocopy_send_server": true, 00:18:59.344 "enable_zerocopy_send_client": false, 00:18:59.344 "zerocopy_threshold": 0, 00:18:59.344 "tls_version": 0, 00:18:59.344 "enable_ktls": false 00:18:59.344 } 00:18:59.344 } 00:18:59.344 ] 00:18:59.344 }, 00:18:59.344 { 00:18:59.344 "subsystem": "vmd", 00:18:59.344 "config": [] 00:18:59.344 }, 00:18:59.344 { 00:18:59.344 "subsystem": "accel", 00:18:59.344 "config": [ 00:18:59.344 { 00:18:59.344 "method": "accel_set_options", 00:18:59.344 "params": { 00:18:59.344 "small_cache_size": 128, 00:18:59.344 "large_cache_size": 16, 00:18:59.344 "task_count": 2048, 00:18:59.345 "sequence_count": 2048, 00:18:59.345 "buf_count": 2048 00:18:59.345 } 00:18:59.345 } 00:18:59.345 ] 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "subsystem": "bdev", 00:18:59.345 "config": [ 00:18:59.345 { 00:18:59.345 "method": "bdev_set_options", 00:18:59.345 "params": { 00:18:59.345 "bdev_io_pool_size": 65535, 00:18:59.345 "bdev_io_cache_size": 256, 00:18:59.345 "bdev_auto_examine": true, 00:18:59.345 "iobuf_small_cache_size": 128, 00:18:59.345 "iobuf_large_cache_size": 16 00:18:59.345 } 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "method": "bdev_raid_set_options", 00:18:59.345 "params": { 00:18:59.345 "process_window_size_kb": 1024 00:18:59.345 } 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "method": "bdev_iscsi_set_options", 00:18:59.345 "params": { 00:18:59.345 "timeout_sec": 30 00:18:59.345 } 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "method": "bdev_nvme_set_options", 00:18:59.345 "params": { 00:18:59.345 "action_on_timeout": "none", 00:18:59.345 "timeout_us": 0, 00:18:59.345 "timeout_admin_us": 0, 00:18:59.345 "keep_alive_timeout_ms": 10000, 00:18:59.345 "arbitration_burst": 0, 00:18:59.345 "low_priority_weight": 0, 00:18:59.345 "medium_priority_weight": 0, 00:18:59.345 
"high_priority_weight": 0, 00:18:59.345 "nvme_adminq_poll_period_us": 10000, 00:18:59.345 "nvme_ioq_poll_period_us": 0, 00:18:59.345 "io_queue_requests": 512, 00:18:59.345 "delay_cmd_submit": true, 00:18:59.345 "transport_retry_count": 4, 00:18:59.345 "bdev_retry_count": 3, 00:18:59.345 "transport_ack_timeout": 0, 00:18:59.345 "ctrlr_loss_timeout_sec": 0, 00:18:59.345 "reconnect_delay_sec": 0, 00:18:59.345 "fast_io_fail_timeout_sec": 0, 00:18:59.345 "disable_auto_failback": false, 00:18:59.345 "generate_uuids": false, 00:18:59.345 "transport_tos": 0, 00:18:59.345 "nvme_error_stat": false, 00:18:59.345 "rdma_srq_size": 0, 00:18:59.345 "io_path_stat": false, 00:18:59.345 "allow_accel_sequence": false, 00:18:59.345 "rdma_max_cq_size": 0, 00:18:59.345 "rdma_cm_event_timeout_ms": 0, 00:18:59.345 "dhchap_digests": [ 00:18:59.345 "sha256", 00:18:59.345 "sha384", 00:18:59.345 "sha512" 00:18:59.345 ], 00:18:59.345 "dhchap_dhgroups": [ 00:18:59.345 "null", 00:18:59.345 "ffdhe2048", 00:18:59.345 "ffdhe3072", 00:18:59.345 "ffdhe4096", 00:18:59.345 "ffdhe6144", 00:18:59.345 "ffdhe8192" 00:18:59.345 ] 00:18:59.345 } 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "method": "bdev_nvme_attach_controller", 00:18:59.345 "params": { 00:18:59.345 "name": "TLSTEST", 00:18:59.345 "trtype": "TCP", 00:18:59.345 "adrfam": "IPv4", 00:18:59.345 "traddr": "10.0.0.2", 00:18:59.345 "trsvcid": "4420", 00:18:59.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.345 "prchk_reftag": false, 00:18:59.345 "prchk_guard": false, 00:18:59.345 "ctrlr_loss_timeout_sec": 0, 00:18:59.345 "reconnect_delay_sec": 0, 00:18:59.345 "fast_io_fail_timeout_sec": 0, 00:18:59.345 "psk": "/tmp/tmp.N6KAhddIiN", 00:18:59.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.345 "hdgst": false, 00:18:59.345 "ddgst": false 00:18:59.345 } 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "method": "bdev_nvme_set_hotplug", 00:18:59.345 "params": { 00:18:59.345 "period_us": 100000, 00:18:59.345 "enable": false 00:18:59.345 } 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "method": "bdev_wait_for_examine" 00:18:59.345 } 00:18:59.345 ] 00:18:59.345 }, 00:18:59.345 { 00:18:59.345 "subsystem": "nbd", 00:18:59.345 "config": [] 00:18:59.345 } 00:18:59.345 ] 00:18:59.345 }' 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 905609 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 905609 ']' 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 905609 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 905609 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 905609' 00:18:59.345 killing process with pid 905609 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 905609 00:18:59.345 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.345 00:18:59.345 Latency(us) 00:18:59.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.345 
=================================================================================================================== 00:18:59.345 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.345 [2024-05-15 00:34:25.420205] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:59.345 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 905609 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 905315 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 905315 ']' 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 905315 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 905315 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 905315' 00:18:59.603 killing process with pid 905315 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 905315 00:18:59.603 [2024-05-15 00:34:25.716589] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:59.603 [2024-05-15 00:34:25.716657] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:59.603 00:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 905315 00:18:59.861 00:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:59.861 00:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.861 00:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:59.861 "subsystems": [ 00:18:59.861 { 00:18:59.861 "subsystem": "keyring", 00:18:59.861 "config": [] 00:18:59.861 }, 00:18:59.861 { 00:18:59.861 "subsystem": "iobuf", 00:18:59.861 "config": [ 00:18:59.861 { 00:18:59.861 "method": "iobuf_set_options", 00:18:59.861 "params": { 00:18:59.861 "small_pool_count": 8192, 00:18:59.861 "large_pool_count": 1024, 00:18:59.861 "small_bufsize": 8192, 00:18:59.861 "large_bufsize": 135168 00:18:59.861 } 00:18:59.861 } 00:18:59.861 ] 00:18:59.861 }, 00:18:59.861 { 00:18:59.861 "subsystem": "sock", 00:18:59.861 "config": [ 00:18:59.861 { 00:18:59.861 "method": "sock_impl_set_options", 00:18:59.861 "params": { 00:18:59.861 "impl_name": "posix", 00:18:59.861 "recv_buf_size": 2097152, 00:18:59.861 "send_buf_size": 2097152, 00:18:59.861 "enable_recv_pipe": true, 00:18:59.861 "enable_quickack": false, 00:18:59.861 "enable_placement_id": 0, 00:18:59.861 "enable_zerocopy_send_server": true, 00:18:59.861 "enable_zerocopy_send_client": false, 00:18:59.862 "zerocopy_threshold": 0, 00:18:59.862 "tls_version": 0, 00:18:59.862 "enable_ktls": false 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "sock_impl_set_options", 00:18:59.862 "params": { 00:18:59.862 "impl_name": "ssl", 00:18:59.862 "recv_buf_size": 4096, 00:18:59.862 
"send_buf_size": 4096, 00:18:59.862 "enable_recv_pipe": true, 00:18:59.862 "enable_quickack": false, 00:18:59.862 "enable_placement_id": 0, 00:18:59.862 "enable_zerocopy_send_server": true, 00:18:59.862 "enable_zerocopy_send_client": false, 00:18:59.862 "zerocopy_threshold": 0, 00:18:59.862 "tls_version": 0, 00:18:59.862 "enable_ktls": false 00:18:59.862 } 00:18:59.862 } 00:18:59.862 ] 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "subsystem": "vmd", 00:18:59.862 "config": [] 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "subsystem": "accel", 00:18:59.862 "config": [ 00:18:59.862 { 00:18:59.862 "method": "accel_set_options", 00:18:59.862 "params": { 00:18:59.862 "small_cache_size": 128, 00:18:59.862 "large_cache_size": 16, 00:18:59.862 "task_count": 2048, 00:18:59.862 "sequence_count": 2048, 00:18:59.862 "buf_count": 2048 00:18:59.862 } 00:18:59.862 } 00:18:59.862 ] 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "subsystem": "bdev", 00:18:59.862 "config": [ 00:18:59.862 { 00:18:59.862 "method": "bdev_set_options", 00:18:59.862 "params": { 00:18:59.862 "bdev_io_pool_size": 65535, 00:18:59.862 "bdev_io_cache_size": 256, 00:18:59.862 "bdev_auto_examine": true, 00:18:59.862 "iobuf_small_cache_size": 128, 00:18:59.862 "iobuf_large_cache_size": 16 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "bdev_raid_set_options", 00:18:59.862 "params": { 00:18:59.862 "process_window_size_kb": 1024 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "bdev_iscsi_set_options", 00:18:59.862 "params": { 00:18:59.862 "timeout_sec": 30 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "bdev_nvme_set_options", 00:18:59.862 "params": { 00:18:59.862 "action_on_timeout": "none", 00:18:59.862 "timeout_us": 0, 00:18:59.862 "timeout_admin_us": 0, 00:18:59.862 "keep_alive_timeout_ms": 10000, 00:18:59.862 "arbitration_burst": 0, 00:18:59.862 "low_priority_weight": 0, 00:18:59.862 "medium_priority_weight": 0, 00:18:59.862 "high_priority_weight": 0, 00:18:59.862 "nvme_adminq_poll_period_us": 10000, 00:18:59.862 "nvme_ioq_poll_period_us": 0, 00:18:59.862 "io_queue_requests": 0, 00:18:59.862 "delay_cmd_submit": true, 00:18:59.862 "transport_retry_count": 4, 00:18:59.862 "bdev_retry_count": 3, 00:18:59.862 "transport_ack_timeout": 0, 00:18:59.862 "ctrlr_loss_timeout_sec": 0, 00:18:59.862 "reconnect_delay_sec": 0, 00:18:59.862 "fast_io_fail_timeout_sec": 0, 00:18:59.862 "disable_auto_failback": false, 00:18:59.862 "generate_uuids": false, 00:18:59.862 "transport_tos": 0, 00:18:59.862 "nvme_error_stat": false, 00:18:59.862 "rdma_srq_size": 0, 00:18:59.862 "io_path_stat": false, 00:18:59.862 "allow_accel_sequence": false, 00:18:59.862 "rdma_max_cq_size": 0, 00:18:59.862 "rdma_cm_event_timeout_ms": 0, 00:18:59.862 "dhchap_digests": [ 00:18:59.862 "sha256", 00:18:59.862 "sha384", 00:18:59.862 "sha512" 00:18:59.862 ], 00:18:59.862 "dhchap_dhgroups": [ 00:18:59.862 "null", 00:18:59.862 "ffdhe2048", 00:18:59.862 "ffdhe3072", 00:18:59.862 "ffdhe4096", 00:18:59.862 "ffdhe6144", 00:18:59.862 "ffdhe8192" 00:18:59.862 ] 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "bdev_nvme_set_hotplug", 00:18:59.862 "params": { 00:18:59.862 "period_us": 100000, 00:18:59.862 "enable": false 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "bdev_malloc_create", 00:18:59.862 "params": { 00:18:59.862 "name": "malloc0", 00:18:59.862 "num_blocks": 8192, 00:18:59.862 "block_size": 4096, 00:18:59.862 "physical_block_size": 4096, 00:18:59.862 "uuid": 
"19a31635-be4c-4f21-9e5e-128e559d161c", 00:18:59.862 "optimal_io_boundary": 0 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "bdev_wait_for_examine" 00:18:59.862 } 00:18:59.862 ] 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "subsystem": "nbd", 00:18:59.862 "config": [] 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "subsystem": "scheduler", 00:18:59.862 "config": [ 00:18:59.862 { 00:18:59.862 "method": "framework_set_scheduler", 00:18:59.862 "params": { 00:18:59.862 "name": "static" 00:18:59.862 } 00:18:59.862 } 00:18:59.862 ] 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "subsystem": "nvmf", 00:18:59.862 "config": [ 00:18:59.862 { 00:18:59.862 "method": "nvmf_set_config", 00:18:59.862 "params": { 00:18:59.862 "discovery_filter": "match_any", 00:18:59.862 "admin_cmd_passthru": { 00:18:59.862 "identify_ctrlr": false 00:18:59.862 } 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "nvmf_set_max_subsystems", 00:18:59.862 "params": { 00:18:59.862 "max_subsystems": 1024 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "nvmf_set_crdt", 00:18:59.862 "params": { 00:18:59.862 "crdt1": 0, 00:18:59.862 "crdt2": 0, 00:18:59.862 "crdt3": 0 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "nvmf_create_transport", 00:18:59.862 "params": { 00:18:59.862 "trtype": "TCP", 00:18:59.862 "max_queue_depth": 128, 00:18:59.862 "max_io_qpairs_per_ctrlr": 127, 00:18:59.862 "in_capsule_data_size": 4096, 00:18:59.862 "max_io_size": 131072, 00:18:59.862 "io_unit_size": 131072, 00:18:59.862 "max_aq_depth": 128, 00:18:59.862 "num_shared_buffers": 511, 00:18:59.862 "buf_cache_size": 4294967295, 00:18:59.862 "dif_insert_or_strip": false, 00:18:59.862 "zcopy": false, 00:18:59.862 "c2h_success": false, 00:18:59.862 "sock_priority": 0, 00:18:59.862 "abort_timeout_sec": 1, 00:18:59.862 "ack_timeout": 0, 00:18:59.862 "data_wr_pool_size": 0 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "nvmf_create_subsystem", 00:18:59.862 "params": { 00:18:59.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.862 "allow_any_host": false, 00:18:59.862 "serial_number": "SPDK00000000000001", 00:18:59.862 "model_number": "SPDK bdev Controller", 00:18:59.862 "max_namespaces": 10, 00:18:59.862 "min_cntlid": 1, 00:18:59.862 "max_cntlid": 65519, 00:18:59.862 "ana_reporting": false 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "nvmf_subsystem_add_host", 00:18:59.862 "params": { 00:18:59.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.862 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.862 "psk": "/tmp/tmp.N6KAhddIiN" 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "nvmf_subsystem_add_ns", 00:18:59.862 "params": { 00:18:59.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.862 "namespace": { 00:18:59.862 "nsid": 1, 00:18:59.862 "bdev_name": "malloc0", 00:18:59.862 "nguid": "19A31635BE4C4F219E5E128E559D161C", 00:18:59.862 "uuid": "19a31635-be4c-4f21-9e5e-128e559d161c", 00:18:59.862 "no_auto_visible": false 00:18:59.862 } 00:18:59.862 } 00:18:59.862 }, 00:18:59.862 { 00:18:59.862 "method": "nvmf_subsystem_add_listener", 00:18:59.862 "params": { 00:18:59.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.862 "listen_address": { 00:18:59.862 "trtype": "TCP", 00:18:59.862 "adrfam": "IPv4", 00:18:59.862 "traddr": "10.0.0.2", 00:18:59.862 "trsvcid": "4420" 00:18:59.862 }, 00:18:59.862 "secure_channel": true 00:18:59.862 } 00:18:59.862 } 00:18:59.862 ] 00:18:59.862 } 00:18:59.862 ] 00:18:59.862 }' 00:18:59.862 00:34:26 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:59.862 00:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=905882 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 905882 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 905882 ']' 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:00.121 00:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.121 [2024-05-15 00:34:26.071501] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:00.121 [2024-05-15 00:34:26.071581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.121 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.121 [2024-05-15 00:34:26.150601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.121 [2024-05-15 00:34:26.266769] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.121 [2024-05-15 00:34:26.266828] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.121 [2024-05-15 00:34:26.266844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.121 [2024-05-15 00:34:26.266858] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.121 [2024-05-15 00:34:26.266870] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
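For this first TLS run the target (nvmfpid 905882) is launched with -m 0x2 -c /dev/fd/62, so the JSON blob echoed at tls.sh@203 above is loaded as a startup configuration rather than applied over RPC. A rough command-line equivalent of what that blob sets up, reusing the key file, NQNs and listener address visible in the log (an illustrative sketch only; the rpc.py path is shortened, and the setup_nvmf_tgt helper later in this log issues essentially the same calls):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k

The -k flag on the listener corresponds to "secure_channel": true in the blob, and the PSK given as a file path is what later surfaces as the 'PSK path' deprecation warning at shutdown.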
00:19:00.121 [2024-05-15 00:34:26.266978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.379 [2024-05-15 00:34:26.495560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.379 [2024-05-15 00:34:26.511503] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.379 [2024-05-15 00:34:26.527520] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:00.379 [2024-05-15 00:34:26.527597] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.379 [2024-05-15 00:34:26.536099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=906034 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 906034 /var/tmp/bdevperf.sock 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 906034 ']' 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:00.944 00:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:00.944 "subsystems": [ 00:19:00.944 { 00:19:00.944 "subsystem": "keyring", 00:19:00.944 "config": [] 00:19:00.944 }, 00:19:00.944 { 00:19:00.944 "subsystem": "iobuf", 00:19:00.944 "config": [ 00:19:00.944 { 00:19:00.944 "method": "iobuf_set_options", 00:19:00.944 "params": { 00:19:00.944 "small_pool_count": 8192, 00:19:00.944 "large_pool_count": 1024, 00:19:00.944 "small_bufsize": 8192, 00:19:00.944 "large_bufsize": 135168 00:19:00.944 } 00:19:00.945 } 00:19:00.945 ] 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "subsystem": "sock", 00:19:00.945 "config": [ 00:19:00.945 { 00:19:00.945 "method": "sock_impl_set_options", 00:19:00.945 "params": { 00:19:00.945 "impl_name": "posix", 00:19:00.945 "recv_buf_size": 2097152, 00:19:00.945 "send_buf_size": 2097152, 00:19:00.945 "enable_recv_pipe": true, 00:19:00.945 "enable_quickack": false, 00:19:00.945 "enable_placement_id": 0, 00:19:00.945 "enable_zerocopy_send_server": true, 00:19:00.945 "enable_zerocopy_send_client": false, 00:19:00.945 "zerocopy_threshold": 0, 00:19:00.945 "tls_version": 0, 00:19:00.945 "enable_ktls": false 00:19:00.945 } 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "method": "sock_impl_set_options", 00:19:00.945 "params": { 00:19:00.945 "impl_name": "ssl", 00:19:00.945 "recv_buf_size": 4096, 00:19:00.945 
"send_buf_size": 4096, 00:19:00.945 "enable_recv_pipe": true, 00:19:00.945 "enable_quickack": false, 00:19:00.945 "enable_placement_id": 0, 00:19:00.945 "enable_zerocopy_send_server": true, 00:19:00.945 "enable_zerocopy_send_client": false, 00:19:00.945 "zerocopy_threshold": 0, 00:19:00.945 "tls_version": 0, 00:19:00.945 "enable_ktls": false 00:19:00.945 } 00:19:00.945 } 00:19:00.945 ] 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "subsystem": "vmd", 00:19:00.945 "config": [] 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "subsystem": "accel", 00:19:00.945 "config": [ 00:19:00.945 { 00:19:00.945 "method": "accel_set_options", 00:19:00.945 "params": { 00:19:00.945 "small_cache_size": 128, 00:19:00.945 "large_cache_size": 16, 00:19:00.945 "task_count": 2048, 00:19:00.945 "sequence_count": 2048, 00:19:00.945 "buf_count": 2048 00:19:00.945 } 00:19:00.945 } 00:19:00.945 ] 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "subsystem": "bdev", 00:19:00.945 "config": [ 00:19:00.945 { 00:19:00.945 "method": "bdev_set_options", 00:19:00.945 "params": { 00:19:00.945 "bdev_io_pool_size": 65535, 00:19:00.945 "bdev_io_cache_size": 256, 00:19:00.945 "bdev_auto_examine": true, 00:19:00.945 "iobuf_small_cache_size": 128, 00:19:00.945 "iobuf_large_cache_size": 16 00:19:00.945 } 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "method": "bdev_raid_set_options", 00:19:00.945 "params": { 00:19:00.945 "process_window_size_kb": 1024 00:19:00.945 } 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "method": "bdev_iscsi_set_options", 00:19:00.945 "params": { 00:19:00.945 "timeout_sec": 30 00:19:00.945 } 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "method": "bdev_nvme_set_options", 00:19:00.945 "params": { 00:19:00.945 "action_on_timeout": "none", 00:19:00.945 "timeout_us": 0, 00:19:00.945 "timeout_admin_us": 0, 00:19:00.945 "keep_alive_timeout_ms": 10000, 00:19:00.945 "arbitration_burst": 0, 00:19:00.945 "low_priority_weight": 0, 00:19:00.945 "medium_priority_weight": 0, 00:19:00.945 "high_priority_weight": 0, 00:19:00.945 "nvme_adminq_poll_period_us": 10000, 00:19:00.945 "nvme_ioq_poll_period_us": 0, 00:19:00.945 "io_queue_requests": 512, 00:19:00.945 "delay_cmd_submit": true, 00:19:00.945 "transport_retry_count": 4, 00:19:00.945 "bdev_retry_count": 3, 00:19:00.945 "transport_ack_timeout": 0, 00:19:00.945 "ctrlr_loss_timeout_sec": 0, 00:19:00.945 "reconnect_delay_sec": 0, 00:19:00.945 "fast_io_fail_timeout_sec": 0, 00:19:00.945 "disable_auto_failback": false, 00:19:00.945 "generate_uuids": false, 00:19:00.945 "transport_tos": 0, 00:19:00.945 "nvme_error_stat": false, 00:19:00.945 "rdma_srq_size": 0, 00:19:00.945 "io_path_stat": false, 00:19:00.945 "allow_accel_sequence": false, 00:19:00.945 "rdma_max_cq_size": 0, 00:19:00.945 "rdma_cm_event_timeout_ms": 0, 00:19:00.945 "dhchap_digests": [ 00:19:00.945 "sha256", 00:19:00.945 "sha384", 00:19:00.945 "sha512" 00:19:00.945 ], 00:19:00.945 "dhchap_dhgroups": [ 00:19:00.945 "null", 00:19:00.945 "ffdhe2048", 00:19:00.945 "ffdhe3072", 00:19:00.945 "ffdhe4096", 00:19:00.945 "ffdhe6144", 00:19:00.945 "ffdhe8192" 00:19:00.945 ] 00:19:00.945 } 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "method": "bdev_nvme_attach_controller", 00:19:00.945 "params": { 00:19:00.945 "name": "TLSTEST", 00:19:00.945 "trtype": "TCP", 00:19:00.945 "adrfam": "IPv4", 00:19:00.945 "traddr": "10.0.0.2", 00:19:00.945 "trsvcid": "4420", 00:19:00.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.945 "prchk_reftag": false, 00:19:00.945 "prchk_guard": false, 00:19:00.945 "ctrlr_loss_timeout_sec": 0, 00:19:00.945 
"reconnect_delay_sec": 0, 00:19:00.945 "fast_io_fail_timeout_sec": 0, 00:19:00.945 "psk": "/tmp/tmp.N6KAhddIiN", 00:19:00.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.945 "hdgst": false, 00:19:00.945 "ddgst": false 00:19:00.945 } 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "method": "bdev_nvme_set_hotplug", 00:19:00.945 "params": { 00:19:00.945 "period_us": 100000, 00:19:00.945 "enable": false 00:19:00.945 } 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "method": "bdev_wait_for_examine" 00:19:00.945 } 00:19:00.945 ] 00:19:00.945 }, 00:19:00.945 { 00:19:00.945 "subsystem": "nbd", 00:19:00.945 "config": [] 00:19:00.945 } 00:19:00.945 ] 00:19:00.945 }' 00:19:00.945 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.945 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:00.945 00:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.203 [2024-05-15 00:34:27.127439] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:01.203 [2024-05-15 00:34:27.127519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906034 ] 00:19:01.203 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.203 [2024-05-15 00:34:27.197798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.203 [2024-05-15 00:34:27.306410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.461 [2024-05-15 00:34:27.469244] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.461 [2024-05-15 00:34:27.469407] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:02.026 00:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:02.026 00:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:02.026 00:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.286 Running I/O for 10 seconds... 
00:19:12.245
00:19:12.245 Latency(us)
00:19:12.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.245 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:12.245 Verification LBA range: start 0x0 length 0x2000
00:19:12.245 TLSTESTn1 : 10.05 1124.60 4.39 0.00 0.00 113592.46 11553.75 121168.78
00:19:12.245 ===================================================================================================================
00:19:12.245 Total : 1124.60 4.39 0.00 0.00 113592.46 11553.75 121168.78
00:19:12.245 0
00:19:12.245 00:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 906034
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 906034 ']'
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 906034
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 906034
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']'
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 906034'
killing process with pid 906034
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 906034
Received shutdown signal, test time was about 10.000000 seconds
00:19:12.245
00:19:12.245 Latency(us)
00:19:12.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:12.245 ===================================================================================================================
00:19:12.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:12.245 [2024-05-15 00:34:38.333318] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:12.245 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 906034
00:19:12.503 00:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 905882
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 905882 ']'
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 905882
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 905882
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 905882'
killing process with pid 905882
00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 905882
[2024-05-15 00:34:38.633522] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:12.503 [2024-05-15 00:34:38.633590] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:12.503 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 905882 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=907371 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 907371 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 907371 ']' 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:12.760 00:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.016 [2024-05-15 00:34:38.962626] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:13.016 [2024-05-15 00:34:38.962725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.016 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.016 [2024-05-15 00:34:39.039506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.016 [2024-05-15 00:34:39.145872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.016 [2024-05-15 00:34:39.145924] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.016 [2024-05-15 00:34:39.145967] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.016 [2024-05-15 00:34:39.145985] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.017 [2024-05-15 00:34:39.146024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
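This next target (nvmfpid 907371) is started without a config file and is built up over RPC by the setup_nvmf_tgt helper traced on the following lines, which again registers the host PSK as a file path (nvmf_subsystem_add_host ... --psk /tmp/tmp.N6KAhddIiN) and therefore trips the same 'PSK path' deprecation notice. The keyring-based form, which the tgtcfg saved near the end of this section shows in use for the last target, would look roughly like this on the target side (a sketch assuming the same key file and NQNs; key0 is simply the name the script gives the key):

    rpc.py keyring_file_add_key key0 /tmp/tmp.N6KAhddIiN
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0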
00:19:13.017 [2024-05-15 00:34:39.146061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.N6KAhddIiN 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N6KAhddIiN 00:19:13.274 00:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:13.532 [2024-05-15 00:34:39.560317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.532 00:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:13.790 00:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:14.048 [2024-05-15 00:34:40.121817] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:14.048 [2024-05-15 00:34:40.121966] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.048 [2024-05-15 00:34:40.122271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.048 00:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:14.306 malloc0 00:19:14.306 00:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.593 00:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N6KAhddIiN 00:19:14.852 [2024-05-15 00:34:40.891721] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=907651 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 907651 /var/tmp/bdevperf.sock 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 907651 ']' 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:14.852 00:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.852 [2024-05-15 00:34:40.949867] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:14.852 [2024-05-15 00:34:40.949970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907651 ] 00:19:14.852 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.110 [2024-05-15 00:34:41.019815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.110 [2024-05-15 00:34:41.129880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.110 00:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:15.110 00:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:15.110 00:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N6KAhddIiN 00:19:15.368 00:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:15.626 [2024-05-15 00:34:41.722281] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.884 nvme0n1 00:19:15.884 00:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.884 Running I/O for 1 seconds... 
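Here the initiator switches to the keyring flow: the key file is first registered with keyring_file_add_key under the name key0, and bdev_nvme_attach_controller then references the key by name (--psk key0) instead of embedding the path in the controller options. To confirm that the key and the TLS-backed controller are in place before the 1-second run, one could query the bdevperf app along these lines (hypothetical checks, not part of the run; nvme0n1 is the bdev name that appears in the results below):

    rpc.py -s /var/tmp/bdevperf.sock keyring_get_keys
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b nvme0n1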
00:19:17.258
00:19:17.258 Latency(us)
00:19:17.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:17.258 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:17.258 Verification LBA range: start 0x0 length 0x2000
00:19:17.258 nvme0n1 : 1.08 1387.86 5.42 0.00 0.00 89702.56 6456.51 129712.73
00:19:17.258 ===================================================================================================================
00:19:17.258 Total : 1387.86 5.42 0.00 0.00 89702.56 6456.51 129712.73
00:19:17.258 0
00:19:17.258 00:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 907651
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 907651 ']'
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 907651
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 907651
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 907651'
killing process with pid 907651
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 907651
Received shutdown signal, test time was about 1.000000 seconds
00:19:17.258
00:19:17.258 Latency(us)
00:19:17.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:17.258 ===================================================================================================================
00:19:17.258 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:17.258 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 907651
00:19:17.258 00:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 907371
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 907371 ']'
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 907371
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 907371
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 907371'
killing process with pid 907371
00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 907371
[2024-05-15 00:34:43.343487] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
[2024-05-15 00:34:43.343545] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:34:43 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 907371 00:19:17.514 00:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=908058 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 908058 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 908058 ']' 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:17.515 00:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.771 [2024-05-15 00:34:43.684365] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:17.771 [2024-05-15 00:34:43.684447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.771 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.771 [2024-05-15 00:34:43.756381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.771 [2024-05-15 00:34:43.862640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.771 [2024-05-15 00:34:43.862692] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.771 [2024-05-15 00:34:43.862711] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.771 [2024-05-15 00:34:43.862727] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.771 [2024-05-15 00:34:43.862741] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
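The last target in this stretch (nvmfpid 908058) is likewise configured purely over rpc_cmd on the lines that follow, and after one more bdevperf pass the script snapshots both sides with save_config: tgtcfg at tls.sh@263 and bperfcfg at tls.sh@264 below, where the PSK now appears as the named keyring key key0 backed by /tmp/tmp.N6KAhddIiN rather than as a raw path on the controller. To pull just those pieces out of a saved config by hand, a jq filter along these lines would do (illustrative only, not what tls.sh itself runs):

    rpc.py save_config | jq '.subsystems[] | select(.subsystem == "keyring")'
    rpc.py save_config | jq '.subsystems[] | select(.subsystem == "nvmf") | .config[] | select(.method == "nvmf_subsystem_add_host")'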
00:19:17.771 [2024-05-15 00:34:43.862775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.705 [2024-05-15 00:34:44.658592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.705 malloc0 00:19:18.705 [2024-05-15 00:34:44.690800] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:18.705 [2024-05-15 00:34:44.690903] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.705 [2024-05-15 00:34:44.691197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=908185 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 908185 /var/tmp/bdevperf.sock 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 908185 ']' 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:18.705 00:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.705 [2024-05-15 00:34:44.760499] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:19:18.705 [2024-05-15 00:34:44.760560] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908185 ] 00:19:18.705 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.705 [2024-05-15 00:34:44.834054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.964 [2024-05-15 00:34:44.950511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.897 00:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:19.897 00:34:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:19.897 00:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.N6KAhddIiN 00:19:19.897 00:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:20.155 [2024-05-15 00:34:46.207472] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.155 nvme0n1 00:19:20.155 00:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.413 Running I/O for 1 seconds... 00:19:21.347 00:19:21.347 Latency(us) 00:19:21.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.347 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:21.347 Verification LBA range: start 0x0 length 0x2000 00:19:21.347 nvme0n1 : 1.09 1310.62 5.12 0.00 0.00 94794.20 6941.96 131266.18 00:19:21.347 =================================================================================================================== 00:19:21.347 Total : 1310.62 5.12 0.00 0.00 94794.20 6941.96 131266.18 00:19:21.347 0 00:19:21.605 00:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:21.605 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.605 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.605 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.605 00:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:21.605 "subsystems": [ 00:19:21.605 { 00:19:21.605 "subsystem": "keyring", 00:19:21.605 "config": [ 00:19:21.605 { 00:19:21.605 "method": "keyring_file_add_key", 00:19:21.605 "params": { 00:19:21.605 "name": "key0", 00:19:21.605 "path": "/tmp/tmp.N6KAhddIiN" 00:19:21.605 } 00:19:21.605 } 00:19:21.605 ] 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "subsystem": "iobuf", 00:19:21.605 "config": [ 00:19:21.605 { 00:19:21.605 "method": "iobuf_set_options", 00:19:21.605 "params": { 00:19:21.605 "small_pool_count": 8192, 00:19:21.605 "large_pool_count": 1024, 00:19:21.605 "small_bufsize": 8192, 00:19:21.605 "large_bufsize": 135168 00:19:21.605 } 00:19:21.605 } 00:19:21.605 ] 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "subsystem": "sock", 00:19:21.605 "config": [ 00:19:21.605 { 00:19:21.605 "method": "sock_impl_set_options", 00:19:21.605 "params": { 00:19:21.605 "impl_name": "posix", 00:19:21.605 "recv_buf_size": 2097152, 
00:19:21.605 "send_buf_size": 2097152, 00:19:21.605 "enable_recv_pipe": true, 00:19:21.605 "enable_quickack": false, 00:19:21.605 "enable_placement_id": 0, 00:19:21.605 "enable_zerocopy_send_server": true, 00:19:21.605 "enable_zerocopy_send_client": false, 00:19:21.605 "zerocopy_threshold": 0, 00:19:21.605 "tls_version": 0, 00:19:21.605 "enable_ktls": false 00:19:21.605 } 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "method": "sock_impl_set_options", 00:19:21.605 "params": { 00:19:21.605 "impl_name": "ssl", 00:19:21.605 "recv_buf_size": 4096, 00:19:21.605 "send_buf_size": 4096, 00:19:21.605 "enable_recv_pipe": true, 00:19:21.605 "enable_quickack": false, 00:19:21.605 "enable_placement_id": 0, 00:19:21.605 "enable_zerocopy_send_server": true, 00:19:21.605 "enable_zerocopy_send_client": false, 00:19:21.605 "zerocopy_threshold": 0, 00:19:21.605 "tls_version": 0, 00:19:21.605 "enable_ktls": false 00:19:21.605 } 00:19:21.605 } 00:19:21.605 ] 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "subsystem": "vmd", 00:19:21.605 "config": [] 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "subsystem": "accel", 00:19:21.605 "config": [ 00:19:21.605 { 00:19:21.605 "method": "accel_set_options", 00:19:21.605 "params": { 00:19:21.605 "small_cache_size": 128, 00:19:21.605 "large_cache_size": 16, 00:19:21.605 "task_count": 2048, 00:19:21.605 "sequence_count": 2048, 00:19:21.605 "buf_count": 2048 00:19:21.605 } 00:19:21.605 } 00:19:21.605 ] 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "subsystem": "bdev", 00:19:21.605 "config": [ 00:19:21.605 { 00:19:21.605 "method": "bdev_set_options", 00:19:21.605 "params": { 00:19:21.605 "bdev_io_pool_size": 65535, 00:19:21.605 "bdev_io_cache_size": 256, 00:19:21.605 "bdev_auto_examine": true, 00:19:21.605 "iobuf_small_cache_size": 128, 00:19:21.605 "iobuf_large_cache_size": 16 00:19:21.605 } 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "method": "bdev_raid_set_options", 00:19:21.605 "params": { 00:19:21.605 "process_window_size_kb": 1024 00:19:21.605 } 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "method": "bdev_iscsi_set_options", 00:19:21.605 "params": { 00:19:21.605 "timeout_sec": 30 00:19:21.605 } 00:19:21.605 }, 00:19:21.605 { 00:19:21.605 "method": "bdev_nvme_set_options", 00:19:21.605 "params": { 00:19:21.605 "action_on_timeout": "none", 00:19:21.605 "timeout_us": 0, 00:19:21.605 "timeout_admin_us": 0, 00:19:21.605 "keep_alive_timeout_ms": 10000, 00:19:21.605 "arbitration_burst": 0, 00:19:21.605 "low_priority_weight": 0, 00:19:21.605 "medium_priority_weight": 0, 00:19:21.605 "high_priority_weight": 0, 00:19:21.605 "nvme_adminq_poll_period_us": 10000, 00:19:21.605 "nvme_ioq_poll_period_us": 0, 00:19:21.605 "io_queue_requests": 0, 00:19:21.605 "delay_cmd_submit": true, 00:19:21.605 "transport_retry_count": 4, 00:19:21.605 "bdev_retry_count": 3, 00:19:21.605 "transport_ack_timeout": 0, 00:19:21.605 "ctrlr_loss_timeout_sec": 0, 00:19:21.605 "reconnect_delay_sec": 0, 00:19:21.605 "fast_io_fail_timeout_sec": 0, 00:19:21.605 "disable_auto_failback": false, 00:19:21.605 "generate_uuids": false, 00:19:21.606 "transport_tos": 0, 00:19:21.606 "nvme_error_stat": false, 00:19:21.606 "rdma_srq_size": 0, 00:19:21.606 "io_path_stat": false, 00:19:21.606 "allow_accel_sequence": false, 00:19:21.606 "rdma_max_cq_size": 0, 00:19:21.606 "rdma_cm_event_timeout_ms": 0, 00:19:21.606 "dhchap_digests": [ 00:19:21.606 "sha256", 00:19:21.606 "sha384", 00:19:21.606 "sha512" 00:19:21.606 ], 00:19:21.606 "dhchap_dhgroups": [ 00:19:21.606 "null", 00:19:21.606 "ffdhe2048", 00:19:21.606 "ffdhe3072", 
00:19:21.606 "ffdhe4096", 00:19:21.606 "ffdhe6144", 00:19:21.606 "ffdhe8192" 00:19:21.606 ] 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "bdev_nvme_set_hotplug", 00:19:21.606 "params": { 00:19:21.606 "period_us": 100000, 00:19:21.606 "enable": false 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "bdev_malloc_create", 00:19:21.606 "params": { 00:19:21.606 "name": "malloc0", 00:19:21.606 "num_blocks": 8192, 00:19:21.606 "block_size": 4096, 00:19:21.606 "physical_block_size": 4096, 00:19:21.606 "uuid": "86171f0f-feda-4383-b2de-6fd40d60c51b", 00:19:21.606 "optimal_io_boundary": 0 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "bdev_wait_for_examine" 00:19:21.606 } 00:19:21.606 ] 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "subsystem": "nbd", 00:19:21.606 "config": [] 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "subsystem": "scheduler", 00:19:21.606 "config": [ 00:19:21.606 { 00:19:21.606 "method": "framework_set_scheduler", 00:19:21.606 "params": { 00:19:21.606 "name": "static" 00:19:21.606 } 00:19:21.606 } 00:19:21.606 ] 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "subsystem": "nvmf", 00:19:21.606 "config": [ 00:19:21.606 { 00:19:21.606 "method": "nvmf_set_config", 00:19:21.606 "params": { 00:19:21.606 "discovery_filter": "match_any", 00:19:21.606 "admin_cmd_passthru": { 00:19:21.606 "identify_ctrlr": false 00:19:21.606 } 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "nvmf_set_max_subsystems", 00:19:21.606 "params": { 00:19:21.606 "max_subsystems": 1024 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "nvmf_set_crdt", 00:19:21.606 "params": { 00:19:21.606 "crdt1": 0, 00:19:21.606 "crdt2": 0, 00:19:21.606 "crdt3": 0 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "nvmf_create_transport", 00:19:21.606 "params": { 00:19:21.606 "trtype": "TCP", 00:19:21.606 "max_queue_depth": 128, 00:19:21.606 "max_io_qpairs_per_ctrlr": 127, 00:19:21.606 "in_capsule_data_size": 4096, 00:19:21.606 "max_io_size": 131072, 00:19:21.606 "io_unit_size": 131072, 00:19:21.606 "max_aq_depth": 128, 00:19:21.606 "num_shared_buffers": 511, 00:19:21.606 "buf_cache_size": 4294967295, 00:19:21.606 "dif_insert_or_strip": false, 00:19:21.606 "zcopy": false, 00:19:21.606 "c2h_success": false, 00:19:21.606 "sock_priority": 0, 00:19:21.606 "abort_timeout_sec": 1, 00:19:21.606 "ack_timeout": 0, 00:19:21.606 "data_wr_pool_size": 0 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "nvmf_create_subsystem", 00:19:21.606 "params": { 00:19:21.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.606 "allow_any_host": false, 00:19:21.606 "serial_number": "00000000000000000000", 00:19:21.606 "model_number": "SPDK bdev Controller", 00:19:21.606 "max_namespaces": 32, 00:19:21.606 "min_cntlid": 1, 00:19:21.606 "max_cntlid": 65519, 00:19:21.606 "ana_reporting": false 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "nvmf_subsystem_add_host", 00:19:21.606 "params": { 00:19:21.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.606 "host": "nqn.2016-06.io.spdk:host1", 00:19:21.606 "psk": "key0" 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "nvmf_subsystem_add_ns", 00:19:21.606 "params": { 00:19:21.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.606 "namespace": { 00:19:21.606 "nsid": 1, 00:19:21.606 "bdev_name": "malloc0", 00:19:21.606 "nguid": "86171F0FFEDA4383B2DE6FD40D60C51B", 00:19:21.606 "uuid": "86171f0f-feda-4383-b2de-6fd40d60c51b", 00:19:21.606 
"no_auto_visible": false 00:19:21.606 } 00:19:21.606 } 00:19:21.606 }, 00:19:21.606 { 00:19:21.606 "method": "nvmf_subsystem_add_listener", 00:19:21.606 "params": { 00:19:21.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.606 "listen_address": { 00:19:21.606 "trtype": "TCP", 00:19:21.606 "adrfam": "IPv4", 00:19:21.606 "traddr": "10.0.0.2", 00:19:21.606 "trsvcid": "4420" 00:19:21.606 }, 00:19:21.606 "secure_channel": true 00:19:21.606 } 00:19:21.606 } 00:19:21.606 ] 00:19:21.606 } 00:19:21.606 ] 00:19:21.606 }' 00:19:21.606 00:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:21.865 00:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:21.865 "subsystems": [ 00:19:21.865 { 00:19:21.865 "subsystem": "keyring", 00:19:21.865 "config": [ 00:19:21.865 { 00:19:21.865 "method": "keyring_file_add_key", 00:19:21.865 "params": { 00:19:21.865 "name": "key0", 00:19:21.865 "path": "/tmp/tmp.N6KAhddIiN" 00:19:21.865 } 00:19:21.865 } 00:19:21.865 ] 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "subsystem": "iobuf", 00:19:21.865 "config": [ 00:19:21.865 { 00:19:21.865 "method": "iobuf_set_options", 00:19:21.865 "params": { 00:19:21.865 "small_pool_count": 8192, 00:19:21.865 "large_pool_count": 1024, 00:19:21.865 "small_bufsize": 8192, 00:19:21.865 "large_bufsize": 135168 00:19:21.865 } 00:19:21.865 } 00:19:21.865 ] 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "subsystem": "sock", 00:19:21.865 "config": [ 00:19:21.865 { 00:19:21.865 "method": "sock_impl_set_options", 00:19:21.865 "params": { 00:19:21.865 "impl_name": "posix", 00:19:21.865 "recv_buf_size": 2097152, 00:19:21.865 "send_buf_size": 2097152, 00:19:21.865 "enable_recv_pipe": true, 00:19:21.865 "enable_quickack": false, 00:19:21.865 "enable_placement_id": 0, 00:19:21.865 "enable_zerocopy_send_server": true, 00:19:21.865 "enable_zerocopy_send_client": false, 00:19:21.865 "zerocopy_threshold": 0, 00:19:21.865 "tls_version": 0, 00:19:21.865 "enable_ktls": false 00:19:21.865 } 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "method": "sock_impl_set_options", 00:19:21.865 "params": { 00:19:21.865 "impl_name": "ssl", 00:19:21.865 "recv_buf_size": 4096, 00:19:21.865 "send_buf_size": 4096, 00:19:21.865 "enable_recv_pipe": true, 00:19:21.865 "enable_quickack": false, 00:19:21.865 "enable_placement_id": 0, 00:19:21.865 "enable_zerocopy_send_server": true, 00:19:21.865 "enable_zerocopy_send_client": false, 00:19:21.865 "zerocopy_threshold": 0, 00:19:21.865 "tls_version": 0, 00:19:21.865 "enable_ktls": false 00:19:21.865 } 00:19:21.865 } 00:19:21.865 ] 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "subsystem": "vmd", 00:19:21.865 "config": [] 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "subsystem": "accel", 00:19:21.865 "config": [ 00:19:21.865 { 00:19:21.865 "method": "accel_set_options", 00:19:21.865 "params": { 00:19:21.865 "small_cache_size": 128, 00:19:21.865 "large_cache_size": 16, 00:19:21.865 "task_count": 2048, 00:19:21.865 "sequence_count": 2048, 00:19:21.865 "buf_count": 2048 00:19:21.865 } 00:19:21.865 } 00:19:21.865 ] 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "subsystem": "bdev", 00:19:21.865 "config": [ 00:19:21.865 { 00:19:21.865 "method": "bdev_set_options", 00:19:21.865 "params": { 00:19:21.865 "bdev_io_pool_size": 65535, 00:19:21.865 "bdev_io_cache_size": 256, 00:19:21.865 "bdev_auto_examine": true, 00:19:21.865 "iobuf_small_cache_size": 128, 00:19:21.865 "iobuf_large_cache_size": 16 00:19:21.865 } 00:19:21.865 }, 
00:19:21.865 { 00:19:21.865 "method": "bdev_raid_set_options", 00:19:21.865 "params": { 00:19:21.865 "process_window_size_kb": 1024 00:19:21.865 } 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "method": "bdev_iscsi_set_options", 00:19:21.865 "params": { 00:19:21.865 "timeout_sec": 30 00:19:21.865 } 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "method": "bdev_nvme_set_options", 00:19:21.865 "params": { 00:19:21.865 "action_on_timeout": "none", 00:19:21.865 "timeout_us": 0, 00:19:21.865 "timeout_admin_us": 0, 00:19:21.865 "keep_alive_timeout_ms": 10000, 00:19:21.865 "arbitration_burst": 0, 00:19:21.865 "low_priority_weight": 0, 00:19:21.865 "medium_priority_weight": 0, 00:19:21.865 "high_priority_weight": 0, 00:19:21.865 "nvme_adminq_poll_period_us": 10000, 00:19:21.865 "nvme_ioq_poll_period_us": 0, 00:19:21.865 "io_queue_requests": 512, 00:19:21.865 "delay_cmd_submit": true, 00:19:21.865 "transport_retry_count": 4, 00:19:21.865 "bdev_retry_count": 3, 00:19:21.865 "transport_ack_timeout": 0, 00:19:21.865 "ctrlr_loss_timeout_sec": 0, 00:19:21.865 "reconnect_delay_sec": 0, 00:19:21.865 "fast_io_fail_timeout_sec": 0, 00:19:21.865 "disable_auto_failback": false, 00:19:21.865 "generate_uuids": false, 00:19:21.865 "transport_tos": 0, 00:19:21.865 "nvme_error_stat": false, 00:19:21.865 "rdma_srq_size": 0, 00:19:21.865 "io_path_stat": false, 00:19:21.865 "allow_accel_sequence": false, 00:19:21.865 "rdma_max_cq_size": 0, 00:19:21.865 "rdma_cm_event_timeout_ms": 0, 00:19:21.865 "dhchap_digests": [ 00:19:21.865 "sha256", 00:19:21.865 "sha384", 00:19:21.865 "sha512" 00:19:21.865 ], 00:19:21.865 "dhchap_dhgroups": [ 00:19:21.865 "null", 00:19:21.865 "ffdhe2048", 00:19:21.865 "ffdhe3072", 00:19:21.865 "ffdhe4096", 00:19:21.865 "ffdhe6144", 00:19:21.865 "ffdhe8192" 00:19:21.865 ] 00:19:21.865 } 00:19:21.865 }, 00:19:21.865 { 00:19:21.865 "method": "bdev_nvme_attach_controller", 00:19:21.865 "params": { 00:19:21.865 "name": "nvme0", 00:19:21.865 "trtype": "TCP", 00:19:21.865 "adrfam": "IPv4", 00:19:21.865 "traddr": "10.0.0.2", 00:19:21.865 "trsvcid": "4420", 00:19:21.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.865 "prchk_reftag": false, 00:19:21.865 "prchk_guard": false, 00:19:21.865 "ctrlr_loss_timeout_sec": 0, 00:19:21.866 "reconnect_delay_sec": 0, 00:19:21.866 "fast_io_fail_timeout_sec": 0, 00:19:21.866 "psk": "key0", 00:19:21.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.866 "hdgst": false, 00:19:21.866 "ddgst": false 00:19:21.866 } 00:19:21.866 }, 00:19:21.866 { 00:19:21.866 "method": "bdev_nvme_set_hotplug", 00:19:21.866 "params": { 00:19:21.866 "period_us": 100000, 00:19:21.866 "enable": false 00:19:21.866 } 00:19:21.866 }, 00:19:21.866 { 00:19:21.866 "method": "bdev_enable_histogram", 00:19:21.866 "params": { 00:19:21.866 "name": "nvme0n1", 00:19:21.866 "enable": true 00:19:21.866 } 00:19:21.866 }, 00:19:21.866 { 00:19:21.866 "method": "bdev_wait_for_examine" 00:19:21.866 } 00:19:21.866 ] 00:19:21.866 }, 00:19:21.866 { 00:19:21.866 "subsystem": "nbd", 00:19:21.866 "config": [] 00:19:21.866 } 00:19:21.866 ] 00:19:21.866 }' 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 908185 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 908185 ']' 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 908185 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:21.866 
00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 908185 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 908185' 00:19:21.866 killing process with pid 908185 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 908185 00:19:21.866 Received shutdown signal, test time was about 1.000000 seconds 00:19:21.866 00:19:21.866 Latency(us) 00:19:21.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.866 =================================================================================================================== 00:19:21.866 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.866 00:34:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 908185 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 908058 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 908058 ']' 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 908058 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 908058 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 908058' 00:19:22.124 killing process with pid 908058 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 908058 00:19:22.124 [2024-05-15 00:34:48.250705] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:22.124 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 908058 00:19:22.691 00:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:22.691 00:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.691 00:34:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:22.691 "subsystems": [ 00:19:22.691 { 00:19:22.691 "subsystem": "keyring", 00:19:22.691 "config": [ 00:19:22.691 { 00:19:22.691 "method": "keyring_file_add_key", 00:19:22.691 "params": { 00:19:22.691 "name": "key0", 00:19:22.691 "path": "/tmp/tmp.N6KAhddIiN" 00:19:22.691 } 00:19:22.691 } 00:19:22.691 ] 00:19:22.691 }, 00:19:22.691 { 00:19:22.691 "subsystem": "iobuf", 00:19:22.691 "config": [ 00:19:22.691 { 00:19:22.691 "method": "iobuf_set_options", 00:19:22.691 "params": { 00:19:22.691 "small_pool_count": 8192, 00:19:22.691 "large_pool_count": 1024, 00:19:22.691 "small_bufsize": 8192, 00:19:22.691 "large_bufsize": 135168 00:19:22.691 } 00:19:22.691 } 00:19:22.691 ] 00:19:22.691 }, 00:19:22.691 { 00:19:22.691 "subsystem": "sock", 00:19:22.691 "config": [ 00:19:22.691 { 00:19:22.691 "method": "sock_impl_set_options", 00:19:22.691 "params": { 00:19:22.691 "impl_name": "posix", 00:19:22.691 
"recv_buf_size": 2097152, 00:19:22.691 "send_buf_size": 2097152, 00:19:22.691 "enable_recv_pipe": true, 00:19:22.691 "enable_quickack": false, 00:19:22.691 "enable_placement_id": 0, 00:19:22.691 "enable_zerocopy_send_server": true, 00:19:22.691 "enable_zerocopy_send_client": false, 00:19:22.691 "zerocopy_threshold": 0, 00:19:22.691 "tls_version": 0, 00:19:22.691 "enable_ktls": false 00:19:22.691 } 00:19:22.691 }, 00:19:22.691 { 00:19:22.691 "method": "sock_impl_set_options", 00:19:22.691 "params": { 00:19:22.691 "impl_name": "ssl", 00:19:22.691 "recv_buf_size": 4096, 00:19:22.691 "send_buf_size": 4096, 00:19:22.691 "enable_recv_pipe": true, 00:19:22.691 "enable_quickack": false, 00:19:22.691 "enable_placement_id": 0, 00:19:22.691 "enable_zerocopy_send_server": true, 00:19:22.691 "enable_zerocopy_send_client": false, 00:19:22.691 "zerocopy_threshold": 0, 00:19:22.691 "tls_version": 0, 00:19:22.691 "enable_ktls": false 00:19:22.691 } 00:19:22.691 } 00:19:22.691 ] 00:19:22.691 }, 00:19:22.691 { 00:19:22.691 "subsystem": "vmd", 00:19:22.691 "config": [] 00:19:22.691 }, 00:19:22.691 { 00:19:22.691 "subsystem": "accel", 00:19:22.691 "config": [ 00:19:22.691 { 00:19:22.691 "method": "accel_set_options", 00:19:22.691 "params": { 00:19:22.691 "small_cache_size": 128, 00:19:22.691 "large_cache_size": 16, 00:19:22.691 "task_count": 2048, 00:19:22.691 "sequence_count": 2048, 00:19:22.691 "buf_count": 2048 00:19:22.691 } 00:19:22.691 } 00:19:22.691 ] 00:19:22.691 }, 00:19:22.691 { 00:19:22.691 "subsystem": "bdev", 00:19:22.691 "config": [ 00:19:22.691 { 00:19:22.691 "method": "bdev_set_options", 00:19:22.691 "params": { 00:19:22.691 "bdev_io_pool_size": 65535, 00:19:22.691 "bdev_io_cache_size": 256, 00:19:22.691 "bdev_auto_examine": true, 00:19:22.691 "iobuf_small_cache_size": 128, 00:19:22.692 "iobuf_large_cache_size": 16 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "bdev_raid_set_options", 00:19:22.692 "params": { 00:19:22.692 "process_window_size_kb": 1024 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "bdev_iscsi_set_options", 00:19:22.692 "params": { 00:19:22.692 "timeout_sec": 30 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "bdev_nvme_set_options", 00:19:22.692 "params": { 00:19:22.692 "action_on_timeout": "none", 00:19:22.692 "timeout_us": 0, 00:19:22.692 "timeout_admin_us": 0, 00:19:22.692 "keep_alive_timeout_ms": 10000, 00:19:22.692 "arbitration_burst": 0, 00:19:22.692 "low_priority_weight": 0, 00:19:22.692 "medium_priority_weight": 0, 00:19:22.692 "high_priority_weight": 0, 00:19:22.692 "nvme_adminq_poll_period_us": 10000, 00:19:22.692 "nvme_ioq_poll_period_us": 0, 00:19:22.692 "io_queue_requests": 0, 00:19:22.692 "delay_cmd_submit": true, 00:19:22.692 "transport_retry_count": 4, 00:19:22.692 "bdev_retry_count": 3, 00:19:22.692 "transport_ack_timeout": 0, 00:19:22.692 "ctrlr_loss_timeout_sec": 0, 00:19:22.692 "reconnect_delay_sec": 0, 00:19:22.692 "fast_io_fail_timeout_sec": 0, 00:19:22.692 "disable_auto_failback": false, 00:19:22.692 "generate_uuids": false, 00:19:22.692 "transport_tos": 0, 00:19:22.692 "nvme_error_stat": false, 00:19:22.692 "rdma_srq_size": 0, 00:19:22.692 "io_path_stat": false, 00:19:22.692 "allow_accel_sequence": false, 00:19:22.692 "rdma_max_cq_size": 0, 00:19:22.692 "rdma_cm_event_timeout_ms": 0, 00:19:22.692 "dhchap_digests": [ 00:19:22.692 "sha256", 00:19:22.692 "sha384", 00:19:22.692 "sha512" 00:19:22.692 ], 00:19:22.692 "dhchap_dhgroups": [ 00:19:22.692 "null", 00:19:22.692 "ffdhe2048", 
00:19:22.692 "ffdhe3072", 00:19:22.692 "ffdhe4096", 00:19:22.692 "ffdhe6144", 00:19:22.692 "ffdhe8192" 00:19:22.692 ] 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "bdev_nvme_set_hotplug", 00:19:22.692 "params": { 00:19:22.692 "period_us": 100000, 00:19:22.692 "enable": false 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "bdev_malloc_create", 00:19:22.692 "params": { 00:19:22.692 "name": "malloc0", 00:19:22.692 "num_blocks": 8192, 00:19:22.692 "block_size": 4096, 00:19:22.692 "physical_block_size": 4096, 00:19:22.692 "uuid": "86171f0f-feda-4383-b2de-6fd40d60c51b", 00:19:22.692 "optimal_io_boundary": 0 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "bdev_wait_for_examine" 00:19:22.692 } 00:19:22.692 ] 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "subsystem": "nbd", 00:19:22.692 "config": [] 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "subsystem": "scheduler", 00:19:22.692 "config": [ 00:19:22.692 { 00:19:22.692 "method": "framework_set_scheduler", 00:19:22.692 "params": { 00:19:22.692 "name": "static" 00:19:22.692 } 00:19:22.692 } 00:19:22.692 ] 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "subsystem": "nvmf", 00:19:22.692 "config": [ 00:19:22.692 { 00:19:22.692 "method": "nvmf_set_config", 00:19:22.692 "params": { 00:19:22.692 "discovery_filter": "match_any", 00:19:22.692 "admin_cmd_passthru": { 00:19:22.692 "identify_ctrlr": false 00:19:22.692 } 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "nvmf_set_max_subsystems", 00:19:22.692 "params": { 00:19:22.692 "max_subsystems": 1024 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "nvmf_set_crdt", 00:19:22.692 "params": { 00:19:22.692 "crdt1": 0, 00:19:22.692 "crdt2": 0, 00:19:22.692 "crdt3": 0 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "nvmf_create_transport", 00:19:22.692 "params": { 00:19:22.692 "trtype": "TCP", 00:19:22.692 "max_queue_depth": 128, 00:19:22.692 "max_io_qpairs_per_ctrlr": 127, 00:19:22.692 "in_capsule_data_size": 4096, 00:19:22.692 "max_io_size": 131072, 00:19:22.692 "io_unit_size": 131072, 00:19:22.692 "max_aq_depth": 128, 00:19:22.692 "num_shared_buffers": 511, 00:19:22.692 "buf_cache_size": 4294967295, 00:19:22.692 "dif_insert_or_strip": false, 00:19:22.692 "zcopy": false, 00:19:22.692 "c2h_success": false, 00:19:22.692 "sock_priority": 0, 00:19:22.692 "abort_timeout_sec": 1, 00:19:22.692 "ack_timeout": 0, 00:19:22.692 "data_wr_pool_size": 0 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "nvmf_create_subsystem", 00:19:22.692 "params": { 00:19:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.692 "allow_any_host": false, 00:19:22.692 "serial_number": "00000000000000000000", 00:19:22.692 "model_number": "SPDK bdev Controller", 00:19:22.692 "max_namespaces": 32, 00:19:22.692 "min_cntlid": 1, 00:19:22.692 "max_cntlid": 65519, 00:19:22.692 "ana_reporting": false 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "nvmf_subsystem_add_host", 00:19:22.692 "params": { 00:19:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.692 "host": "nqn.2016-06.io.spdk:host1", 00:19:22.692 "psk": "key0" 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "nvmf_subsystem_add_ns", 00:19:22.692 "params": { 00:19:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.692 "namespace": { 00:19:22.692 "nsid": 1, 00:19:22.692 "bdev_name": "malloc0", 00:19:22.692 "nguid": "86171F0FFEDA4383B2DE6FD40D60C51B", 00:19:22.692 "uuid": 
"86171f0f-feda-4383-b2de-6fd40d60c51b", 00:19:22.692 "no_auto_visible": false 00:19:22.692 } 00:19:22.692 } 00:19:22.692 }, 00:19:22.692 { 00:19:22.692 "method": "nvmf_subsystem_add_listener", 00:19:22.692 "params": { 00:19:22.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.692 "listen_address": { 00:19:22.692 "trtype": "TCP", 00:19:22.692 "adrfam": "IPv4", 00:19:22.692 "traddr": "10.0.0.2", 00:19:22.692 "trsvcid": "4420" 00:19:22.692 }, 00:19:22.692 "secure_channel": true 00:19:22.692 } 00:19:22.692 } 00:19:22.692 ] 00:19:22.692 } 00:19:22.692 ] 00:19:22.692 }' 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=908626 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 908626 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 908626 ']' 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:22.692 00:34:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.692 [2024-05-15 00:34:48.603575] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:22.692 [2024-05-15 00:34:48.603667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.692 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.692 [2024-05-15 00:34:48.682991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.692 [2024-05-15 00:34:48.795615] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.692 [2024-05-15 00:34:48.795688] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.692 [2024-05-15 00:34:48.795714] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.692 [2024-05-15 00:34:48.795736] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.692 [2024-05-15 00:34:48.795753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:22.692 [2024-05-15 00:34:48.795867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.950 [2024-05-15 00:34:49.032794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.950 [2024-05-15 00:34:49.064782] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:22.950 [2024-05-15 00:34:49.064854] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:22.950 [2024-05-15 00:34:49.073089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=908778 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 908778 /var/tmp/bdevperf.sock 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 908778 ']' 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
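bdevperf is launched here with -z (hold the workload until a perform_tests RPC arrives), a private RPC socket, and its own config piped in on /dev/fd/63; the harness then has to wait for that socket to answer before issuing any RPCs. A rough, simplified equivalent of the launch plus wait loop, assuming the configuration echoed just below is already held in $bperfcfg:

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    bdevperf_pid=$!
    # crude stand-in for waitforlisten: poll the RPC socket until it responds
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done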
00:19:23.516 00:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:23.516 "subsystems": [ 00:19:23.516 { 00:19:23.516 "subsystem": "keyring", 00:19:23.516 "config": [ 00:19:23.516 { 00:19:23.516 "method": "keyring_file_add_key", 00:19:23.516 "params": { 00:19:23.516 "name": "key0", 00:19:23.516 "path": "/tmp/tmp.N6KAhddIiN" 00:19:23.516 } 00:19:23.516 } 00:19:23.516 ] 00:19:23.516 }, 00:19:23.516 { 00:19:23.516 "subsystem": "iobuf", 00:19:23.516 "config": [ 00:19:23.516 { 00:19:23.516 "method": "iobuf_set_options", 00:19:23.516 "params": { 00:19:23.516 "small_pool_count": 8192, 00:19:23.516 "large_pool_count": 1024, 00:19:23.516 "small_bufsize": 8192, 00:19:23.516 "large_bufsize": 135168 00:19:23.516 } 00:19:23.516 } 00:19:23.516 ] 00:19:23.516 }, 00:19:23.516 { 00:19:23.516 "subsystem": "sock", 00:19:23.516 "config": [ 00:19:23.516 { 00:19:23.516 "method": "sock_impl_set_options", 00:19:23.516 "params": { 00:19:23.516 "impl_name": "posix", 00:19:23.516 "recv_buf_size": 2097152, 00:19:23.516 "send_buf_size": 2097152, 00:19:23.516 "enable_recv_pipe": true, 00:19:23.516 "enable_quickack": false, 00:19:23.516 "enable_placement_id": 0, 00:19:23.516 "enable_zerocopy_send_server": true, 00:19:23.516 "enable_zerocopy_send_client": false, 00:19:23.516 "zerocopy_threshold": 0, 00:19:23.516 "tls_version": 0, 00:19:23.516 "enable_ktls": false 00:19:23.516 } 00:19:23.516 }, 00:19:23.516 { 00:19:23.516 "method": "sock_impl_set_options", 00:19:23.516 "params": { 00:19:23.517 "impl_name": "ssl", 00:19:23.517 "recv_buf_size": 4096, 00:19:23.517 "send_buf_size": 4096, 00:19:23.517 "enable_recv_pipe": true, 00:19:23.517 "enable_quickack": false, 00:19:23.517 "enable_placement_id": 0, 00:19:23.517 "enable_zerocopy_send_server": true, 00:19:23.517 "enable_zerocopy_send_client": false, 00:19:23.517 "zerocopy_threshold": 0, 00:19:23.517 "tls_version": 0, 00:19:23.517 "enable_ktls": false 00:19:23.517 } 00:19:23.517 } 00:19:23.517 ] 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "subsystem": "vmd", 00:19:23.517 "config": [] 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "subsystem": "accel", 00:19:23.517 "config": [ 00:19:23.517 { 00:19:23.517 "method": "accel_set_options", 00:19:23.517 "params": { 00:19:23.517 "small_cache_size": 128, 00:19:23.517 "large_cache_size": 16, 00:19:23.517 "task_count": 2048, 00:19:23.517 "sequence_count": 2048, 00:19:23.517 "buf_count": 2048 00:19:23.517 } 00:19:23.517 } 00:19:23.517 ] 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "subsystem": "bdev", 00:19:23.517 "config": [ 00:19:23.517 { 00:19:23.517 "method": "bdev_set_options", 00:19:23.517 "params": { 00:19:23.517 "bdev_io_pool_size": 65535, 00:19:23.517 "bdev_io_cache_size": 256, 00:19:23.517 "bdev_auto_examine": true, 00:19:23.517 "iobuf_small_cache_size": 128, 00:19:23.517 "iobuf_large_cache_size": 16 00:19:23.517 } 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "method": "bdev_raid_set_options", 00:19:23.517 "params": { 00:19:23.517 "process_window_size_kb": 1024 00:19:23.517 } 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "method": "bdev_iscsi_set_options", 00:19:23.517 "params": { 00:19:23.517 "timeout_sec": 30 00:19:23.517 } 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "method": "bdev_nvme_set_options", 00:19:23.517 "params": { 00:19:23.517 "action_on_timeout": "none", 00:19:23.517 "timeout_us": 0, 00:19:23.517 "timeout_admin_us": 0, 00:19:23.517 "keep_alive_timeout_ms": 10000, 00:19:23.517 "arbitration_burst": 0, 00:19:23.517 "low_priority_weight": 0, 00:19:23.517 "medium_priority_weight": 0, 00:19:23.517 
"high_priority_weight": 0, 00:19:23.517 "nvme_adminq_poll_period_us": 10000, 00:19:23.517 "nvme_ioq_poll_period_us": 0, 00:19:23.517 "io_queue_requests": 512, 00:19:23.517 "delay_cmd_submit": true, 00:19:23.517 "transport_retry_count": 4, 00:19:23.517 "bdev_retry_count": 3, 00:19:23.517 "transport_ack_timeout": 0, 00:19:23.517 "ctrlr_loss_timeout_sec": 0, 00:19:23.517 "reconnect_delay_sec": 0, 00:19:23.517 "fast_io_fail_timeout_sec": 0, 00:19:23.517 "disable_auto_failback": false, 00:19:23.517 "generate_uuids": false, 00:19:23.517 "transport_tos": 0, 00:19:23.517 "nvme_error_stat": false, 00:19:23.517 "rdma_srq_size": 0, 00:19:23.517 "io_path_stat": false, 00:19:23.517 "allow_accel_sequence": false, 00:19:23.517 "rdma_max_cq_size": 0, 00:19:23.517 "rdma_cm_event_timeout_ms": 0, 00:19:23.517 "dhchap_digests": [ 00:19:23.517 "sha256", 00:19:23.517 "sha384", 00:19:23.517 "sha512" 00:19:23.517 ], 00:19:23.517 "dhchap_dhgroups": [ 00:19:23.517 "null", 00:19:23.517 "ffdhe2048", 00:19:23.517 "ffdhe3072", 00:19:23.517 "ffdhe4096", 00:19:23.517 "ffdhe6144", 00:19:23.517 "ffdhe8192" 00:19:23.517 ] 00:19:23.517 } 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "method": "bdev_nvme_attach_controller", 00:19:23.517 "params": { 00:19:23.517 "name": "nvme0", 00:19:23.517 "trtype": "TCP", 00:19:23.517 "adrfam": "IPv4", 00:19:23.517 "traddr": "10.0.0.2", 00:19:23.517 "trsvcid": "4420", 00:19:23.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.517 "prchk_reftag": false, 00:19:23.517 "prchk_guard": false, 00:19:23.517 "ctrlr_loss_timeout_sec": 0, 00:19:23.517 "reconnect_delay_sec": 0, 00:19:23.517 "fast_io_fail_timeout_sec": 0, 00:19:23.517 "psk": "key0", 00:19:23.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.517 "hdgst": false, 00:19:23.517 "ddgst": false 00:19:23.517 } 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "method": "bdev_nvme_set_hotplug", 00:19:23.517 "params": { 00:19:23.517 "period_us": 100000, 00:19:23.517 "enable": false 00:19:23.517 } 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "method": "bdev_enable_histogram", 00:19:23.517 "params": { 00:19:23.517 "name": "nvme0n1", 00:19:23.517 "enable": true 00:19:23.517 } 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "method": "bdev_wait_for_examine" 00:19:23.517 } 00:19:23.517 ] 00:19:23.517 }, 00:19:23.517 { 00:19:23.517 "subsystem": "nbd", 00:19:23.517 "config": [] 00:19:23.517 } 00:19:23.517 ] 00:19:23.517 }' 00:19:23.517 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:23.517 00:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.517 [2024-05-15 00:34:49.654923] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:19:23.517 [2024-05-15 00:34:49.655015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908778 ] 00:19:23.775 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.775 [2024-05-15 00:34:49.726899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.775 [2024-05-15 00:34:49.842378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.034 [2024-05-15 00:34:50.022107] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.599 00:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:24.599 00:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:24.599 00:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:24.599 00:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:24.857 00:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.857 00:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.857 Running I/O for 1 seconds... 00:19:26.232 00:19:26.232 Latency(us) 00:19:26.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.232 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:26.232 Verification LBA range: start 0x0 length 0x2000 00:19:26.232 nvme0n1 : 1.08 1347.57 5.26 0.00 0.00 92432.48 6941.96 127382.57 00:19:26.232 =================================================================================================================== 00:19:26.232 Total : 1347.57 5.26 0.00 0.00 92432.48 6941.96 127382.57 00:19:26.232 0 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:26.232 nvmf_trace.0 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 908778 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 908778 ']' 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 908778 
00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 908778 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 908778' 00:19:26.232 killing process with pid 908778 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 908778 00:19:26.232 Received shutdown signal, test time was about 1.000000 seconds 00:19:26.232 00:19:26.232 Latency(us) 00:19:26.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.232 =================================================================================================================== 00:19:26.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.232 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 908778 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.491 rmmod nvme_tcp 00:19:26.491 rmmod nvme_fabrics 00:19:26.491 rmmod nvme_keyring 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 908626 ']' 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 908626 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 908626 ']' 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 908626 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 908626 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 908626' 00:19:26.491 killing process with pid 908626 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 908626 00:19:26.491 [2024-05-15 00:34:52.495927] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:26.491 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 908626 
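Both teardown paths above (bdevperf pid 908778, then the target pid 908626) go through the same killprocess helper: confirm the PID is still alive with kill -0, check via ps that the command is an SPDK reactor rather than sudo, then kill it and wait for it to exit. A condensed sketch of that flow; the real helper in autotest_common.sh carries additional platform handling not shown here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing left to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1                 # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }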
00:19:26.748 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.748 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:26.748 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:26.748 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.748 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.748 00:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.749 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.749 00:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.702 00:34:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:28.702 00:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.82GR4ukP44 /tmp/tmp.skwLHz1jhN /tmp/tmp.N6KAhddIiN 00:19:28.702 00:19:28.702 real 1m25.301s 00:19:28.702 user 2m9.141s 00:19:28.702 sys 0m27.661s 00:19:28.702 00:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:28.702 00:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.702 ************************************ 00:19:28.702 END TEST nvmf_tls 00:19:28.702 ************************************ 00:19:28.702 00:34:54 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:28.702 00:34:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:28.702 00:34:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:28.702 00:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.961 ************************************ 00:19:28.961 START TEST nvmf_fips 00:19:28.961 ************************************ 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:28.961 * Looking for test storage... 
00:19:28.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.961 00:34:54 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:28.961 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:28.962 00:34:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:19:28.962 Error setting digest 00:19:28.962 0032DC7C757F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:28.962 0032DC7C757F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.962 00:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.492 
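gather_supported_nvmf_pci_devs, traced above, builds per-family device-ID lists (e810, x722, mlx) and then, in the lines that follow, walks the matching PCI functions to collect the kernel net devices behind them, which is where the cvl_0_0/cvl_0_1 names below come from. A simplified sketch of that scan, using lspci -Dnn as a stand-in for the script's own PCI cache and covering only the two E810 IDs seen in this run:

    # find Intel E810 functions (0x1592 / 0x159b) and the net devices bound to them
    mapfile -t e810 < <(lspci -Dnn | awk '/8086:(1592|159b)/ {print $1}')
    net_devs=()
    for pci in "${e810[@]}"; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] && net_devs+=("${dev##*/}")
        done
    done
    (( ${#net_devs[@]} > 0 )) || { echo "no usable NICs found" >&2; exit 1; }
    echo "Found net devices: ${net_devs[*]}"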
00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:31.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:31.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:31.492 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:31.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.492 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:31.751 00:19:31.751 --- 10.0.0.2 ping statistics --- 00:19:31.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.751 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:19:31.751 00:19:31.751 --- 10.0.0.1 ping statistics --- 00:19:31.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.751 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=911439 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 911439 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 911439 ']' 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:31.751 00:34:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.751 [2024-05-15 00:34:57.769607] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:31.751 [2024-05-15 00:34:57.769694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.751 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.751 [2024-05-15 00:34:57.843813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.009 [2024-05-15 00:34:57.951973] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.009 [2024-05-15 00:34:57.952020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:32.009 [2024-05-15 00:34:57.952034] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.009 [2024-05-15 00:34:57.952044] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.009 [2024-05-15 00:34:57.952055] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.009 [2024-05-15 00:34:57.952081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:32.009 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.267 [2024-05-15 00:34:58.371743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.267 [2024-05-15 00:34:58.387698] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:32.267 [2024-05-15 00:34:58.387772] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.267 [2024-05-15 00:34:58.388001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.267 [2024-05-15 00:34:58.419544] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:32.267 malloc0 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=911582 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 911582 /var/tmp/bdevperf.sock 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@828 -- # '[' -z 911582 ']' 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:32.525 00:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:32.525 [2024-05-15 00:34:58.512125] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:32.525 [2024-05-15 00:34:58.512212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911582 ] 00:19:32.525 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.525 [2024-05-15 00:34:58.579153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.525 [2024-05-15 00:34:58.687445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.458 00:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:33.458 00:34:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:19:33.458 00:34:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:33.715 [2024-05-15 00:34:59.662987] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.715 [2024-05-15 00:34:59.663095] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:33.715 TLSTESTn1 00:19:33.715 00:34:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:33.715 Running I/O for 10 seconds... 
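In outline, the TLS exercise above reduces to a few steps; this is a condensed sketch of the commands already traced in the log, with paths abbreviated and $KEY standing in for the key.txt file that fips.sh writes (the output redirection itself is not visible in the xtrace):
    # PSK in NVMe/TCP interchange format, stored with mode 0600
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > $KEY
    chmod 0600 $KEY
    # start bdevperf in RPC-wait mode (-z), then attach a TLS controller using the PSK
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk $KEY
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The 10-second verify run started here is what produces the IOPS/latency table that follows.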
00:19:45.906 00:19:45.906 Latency(us) 00:19:45.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.906 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.906 Verification LBA range: start 0x0 length 0x2000 00:19:45.906 TLSTESTn1 : 10.07 1638.67 6.40 0.00 0.00 77849.69 5971.06 118061.89 00:19:45.906 =================================================================================================================== 00:19:45.906 Total : 1638.67 6.40 0.00 0.00 77849.69 5971.06 118061.89 00:19:45.906 0 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:19:45.906 00:35:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:45.906 nvmf_trace.0 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 911582 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 911582 ']' 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 911582 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 911582 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 911582' 00:19:45.906 killing process with pid 911582 00:19:45.906 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 911582 00:19:45.906 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.906 00:19:45.907 Latency(us) 00:19:45.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.907 =================================================================================================================== 00:19:45.907 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.907 [2024-05-15 00:35:10.055205] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 911582 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.907 rmmod nvme_tcp 00:19:45.907 rmmod nvme_fabrics 00:19:45.907 rmmod nvme_keyring 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 911439 ']' 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 911439 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 911439 ']' 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 911439 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 911439 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 911439' 00:19:45.907 killing process with pid 911439 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 911439 00:19:45.907 [2024-05-15 00:35:10.389477] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:45.907 [2024-05-15 00:35:10.389522] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 911439 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.907 00:35:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.844 00:35:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:46.844 00:35:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:46.844 00:19:46.844 real 0m17.872s 00:19:46.844 user 0m20.749s 00:19:46.844 sys 0m7.296s 00:19:46.844 00:35:12 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:19:46.844 00:35:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:46.844 ************************************ 00:19:46.844 END TEST nvmf_fips 00:19:46.844 ************************************ 00:19:46.844 00:35:12 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:46.844 00:35:12 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:46.844 00:35:12 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:46.844 00:35:12 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:46.844 00:35:12 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:46.844 00:35:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.373 00:35:15 nvmf_tcp -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:49.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:49.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.373 00:35:15 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:49.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:49.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:49.374 00:35:15 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:49.374 00:35:15 
nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:49.374 00:35:15 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:49.374 00:35:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.374 ************************************ 00:19:49.374 START TEST nvmf_perf_adq 00:19:49.374 ************************************ 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:49.374 * Looking for test storage... 00:19:49.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.374 00:35:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.901 00:35:17 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:51.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:51.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:51.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:51.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:51.902 00:35:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:52.160 00:35:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:54.094 00:35:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.381 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.381 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:19:59.382 00:19:59.382 --- 10.0.0.2 ping statistics --- 00:19:59.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.382 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:19:59.382 00:19:59.382 --- 10.0.0.1 ping statistics --- 00:19:59.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.382 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=918035 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 918035 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 918035 ']' 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
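The nvmf_tcp_init block just above (and its twin earlier in the fips run) is what lets one host act as both target and initiator over the two physical E810 ports: cvl_0_0 is moved into a private namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Stripped of the harness plumbing, the recipe shown in the trace is roughly:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, private namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                     # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
nvmf_tgt is then launched under 'ip netns exec cvl_0_0_ns_spdk', which is why every NVMF_APP invocation in this log carries that prefix.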
00:19:59.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:59.382 00:35:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 [2024-05-15 00:35:24.965122] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:59.382 [2024-05-15 00:35:24.965202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.382 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.382 [2024-05-15 00:35:25.045183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.382 [2024-05-15 00:35:25.169903] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.382 [2024-05-15 00:35:25.169969] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.382 [2024-05-15 00:35:25.169987] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.382 [2024-05-15 00:35:25.170001] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.382 [2024-05-15 00:35:25.170012] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.382 [2024-05-15 00:35:25.173957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.382 [2024-05-15 00:35:25.174006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.382 [2024-05-15 00:35:25.174098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.382 [2024-05-15 00:35:25.174101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 [2024-05-15 00:35:25.397775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 Malloc1 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:59.382 [2024-05-15 00:35:25.448854] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:59.382 [2024-05-15 00:35:25.449189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=918072 00:19:59.382 00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:59.382 
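rpc_cmd in the trace above is the suite's wrapper around scripts/rpc.py, so the adq_configure_nvmf_target sequence corresponds to roughly the following standalone calls. This is a sketch, assuming the target was started with --wait-for-rpc and listens on the default /var/tmp/spdk.sock; the --enable-placement-id and --sock-priority values (0 here, 1 in the later busy-poll run) are the knobs that differ between the two runs:

    # Socket options have to go in before framework_start_init leaves the
    # --wait-for-rpc state; the rest mirrors the traced commands verbatim.
    ./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420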
00:35:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:59.382 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:01.913 "tick_rate": 2700000000, 00:20:01.913 "poll_groups": [ 00:20:01.913 { 00:20:01.913 "name": "nvmf_tgt_poll_group_000", 00:20:01.913 "admin_qpairs": 1, 00:20:01.913 "io_qpairs": 1, 00:20:01.913 "current_admin_qpairs": 1, 00:20:01.913 "current_io_qpairs": 1, 00:20:01.913 "pending_bdev_io": 0, 00:20:01.913 "completed_nvme_io": 18897, 00:20:01.913 "transports": [ 00:20:01.913 { 00:20:01.913 "trtype": "TCP" 00:20:01.913 } 00:20:01.913 ] 00:20:01.913 }, 00:20:01.913 { 00:20:01.913 "name": "nvmf_tgt_poll_group_001", 00:20:01.913 "admin_qpairs": 0, 00:20:01.913 "io_qpairs": 1, 00:20:01.913 "current_admin_qpairs": 0, 00:20:01.913 "current_io_qpairs": 1, 00:20:01.913 "pending_bdev_io": 0, 00:20:01.913 "completed_nvme_io": 20730, 00:20:01.913 "transports": [ 00:20:01.913 { 00:20:01.913 "trtype": "TCP" 00:20:01.913 } 00:20:01.913 ] 00:20:01.913 }, 00:20:01.913 { 00:20:01.913 "name": "nvmf_tgt_poll_group_002", 00:20:01.913 "admin_qpairs": 0, 00:20:01.913 "io_qpairs": 1, 00:20:01.913 "current_admin_qpairs": 0, 00:20:01.913 "current_io_qpairs": 1, 00:20:01.913 "pending_bdev_io": 0, 00:20:01.913 "completed_nvme_io": 20343, 00:20:01.913 "transports": [ 00:20:01.913 { 00:20:01.913 "trtype": "TCP" 00:20:01.913 } 00:20:01.913 ] 00:20:01.913 }, 00:20:01.913 { 00:20:01.913 "name": "nvmf_tgt_poll_group_003", 00:20:01.913 "admin_qpairs": 0, 00:20:01.913 "io_qpairs": 1, 00:20:01.913 "current_admin_qpairs": 0, 00:20:01.913 "current_io_qpairs": 1, 00:20:01.913 "pending_bdev_io": 0, 00:20:01.913 "completed_nvme_io": 19776, 00:20:01.913 "transports": [ 00:20:01.913 { 00:20:01.913 "trtype": "TCP" 00:20:01.913 } 00:20:01.913 ] 00:20:01.913 } 00:20:01.913 ] 00:20:01.913 }' 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:01.913 00:35:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 918072 00:20:10.020 Initializing NVMe Controllers 00:20:10.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:10.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:10.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:10.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:10.020 Initialization complete. Launching workers. 
00:20:10.020 ======================================================== 00:20:10.020 Latency(us) 00:20:10.020 Device Information : IOPS MiB/s Average min max 00:20:10.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10388.90 40.58 6161.99 1504.62 10351.60 00:20:10.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10902.70 42.59 5869.76 1942.29 8818.70 00:20:10.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10702.50 41.81 5981.13 2369.60 8896.21 00:20:10.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9923.70 38.76 6450.94 1478.32 10297.55 00:20:10.020 ======================================================== 00:20:10.020 Total : 41917.78 163.74 6108.21 1478.32 10351.60 00:20:10.020 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.020 rmmod nvme_tcp 00:20:10.020 rmmod nvme_fabrics 00:20:10.020 rmmod nvme_keyring 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 918035 ']' 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 918035 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 918035 ']' 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 918035 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 918035 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 918035' 00:20:10.020 killing process with pid 918035 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 918035 00:20:10.020 [2024-05-15 00:35:35.653120] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 918035 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.020 00:35:35 
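The latency table above comes from a single spdk_nvme_perf invocation; both runs in this test drive the target with the same initiator command and only differ on the target/NIC side. The invocation, as traced above:

    # 64-deep 4 KiB random reads for 10 s on cores 4-7 (-c 0xF0), against the
    # malloc namespace exported by cnode1 over NVMe/TCP.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'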
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.020 00:35:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.924 00:35:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:11.924 00:35:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:11.924 00:35:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:12.490 00:35:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:14.391 00:35:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:19.662 
00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.662 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:19.663 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:19.663 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:19.663 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:19.663 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:19.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:20:19.663 00:20:19.663 --- 10.0.0.2 ping statistics --- 00:20:19.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.663 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:20:19.663 00:20:19.663 --- 10.0.0.1 ping statistics --- 00:20:19.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.663 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:19.663 net.core.busy_poll = 1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:19.663 net.core.busy_read = 1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=920678 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 920678 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 920678 ']' 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:19.663 00:35:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.663 [2024-05-15 00:35:45.456850] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:20:19.663 [2024-05-15 00:35:45.456953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.663 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.664 [2024-05-15 00:35:45.535748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.664 [2024-05-15 00:35:45.645937] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.664 [2024-05-15 00:35:45.646010] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.664 [2024-05-15 00:35:45.646024] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.664 [2024-05-15 00:35:45.646036] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.664 [2024-05-15 00:35:45.646046] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
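adq_configure_driver, traced above right after the ice driver reload, is where the NIC is actually put into ADQ mode. A condensed sketch of the same commands ($IFACE/$NS are shorthand introduced here only for readability; the set_xps_rxqs helper from the SPDK tree is run afterwards and is not reproduced):

    IFACE=cvl_0_0
    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload $IFACE hw-tc-offload on                  # enable hardware TC offload
    $NS ethtool --set-priv-flags $IFACE channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded to hardware.
    $NS tc qdisc add dev $IFACE root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev $IFACE ingress
    # Steer NVMe/TCP traffic (dst port 4420) into TC1 entirely in hardware.
    $NS tc filter add dev $IFACE protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1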
00:20:19.664 [2024-05-15 00:35:45.646098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.664 [2024-05-15 00:35:45.646158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.664 [2024-05-15 00:35:45.646223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.664 [2024-05-15 00:35:45.646226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 [2024-05-15 00:35:46.593504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 Malloc1 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:20.634 [2024-05-15 00:35:46.644127] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:20.634 [2024-05-15 00:35:46.644422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=920836 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:20.634 00:35:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:20.634 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.536 00:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:22.536 00:35:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.536 00:35:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.536 00:35:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.536 00:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:22.536 "tick_rate": 2700000000, 00:20:22.536 "poll_groups": [ 00:20:22.536 { 00:20:22.537 "name": "nvmf_tgt_poll_group_000", 00:20:22.537 "admin_qpairs": 1, 00:20:22.537 "io_qpairs": 2, 00:20:22.537 "current_admin_qpairs": 1, 00:20:22.537 "current_io_qpairs": 2, 00:20:22.537 "pending_bdev_io": 0, 00:20:22.537 "completed_nvme_io": 26840, 00:20:22.537 "transports": [ 00:20:22.537 { 00:20:22.537 "trtype": "TCP" 00:20:22.537 } 00:20:22.537 ] 00:20:22.537 }, 00:20:22.537 { 00:20:22.537 "name": "nvmf_tgt_poll_group_001", 00:20:22.537 "admin_qpairs": 0, 00:20:22.537 "io_qpairs": 2, 00:20:22.537 "current_admin_qpairs": 0, 00:20:22.537 "current_io_qpairs": 2, 00:20:22.537 "pending_bdev_io": 0, 00:20:22.537 "completed_nvme_io": 21725, 00:20:22.537 "transports": [ 00:20:22.537 { 00:20:22.537 "trtype": "TCP" 00:20:22.537 } 00:20:22.537 ] 00:20:22.537 }, 00:20:22.537 { 00:20:22.537 "name": 
"nvmf_tgt_poll_group_002", 00:20:22.537 "admin_qpairs": 0, 00:20:22.537 "io_qpairs": 0, 00:20:22.537 "current_admin_qpairs": 0, 00:20:22.537 "current_io_qpairs": 0, 00:20:22.537 "pending_bdev_io": 0, 00:20:22.537 "completed_nvme_io": 0, 00:20:22.537 "transports": [ 00:20:22.537 { 00:20:22.537 "trtype": "TCP" 00:20:22.537 } 00:20:22.537 ] 00:20:22.537 }, 00:20:22.537 { 00:20:22.537 "name": "nvmf_tgt_poll_group_003", 00:20:22.537 "admin_qpairs": 0, 00:20:22.537 "io_qpairs": 0, 00:20:22.537 "current_admin_qpairs": 0, 00:20:22.537 "current_io_qpairs": 0, 00:20:22.537 "pending_bdev_io": 0, 00:20:22.537 "completed_nvme_io": 0, 00:20:22.537 "transports": [ 00:20:22.537 { 00:20:22.537 "trtype": "TCP" 00:20:22.537 } 00:20:22.537 ] 00:20:22.537 } 00:20:22.537 ] 00:20:22.537 }' 00:20:22.537 00:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:22.537 00:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:22.795 00:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:22.795 00:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:22.795 00:35:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 920836 00:20:30.911 Initializing NVMe Controllers 00:20:30.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:30.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:30.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:30.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:30.911 Initialization complete. Launching workers. 
00:20:30.911 ======================================================== 00:20:30.911 Latency(us) 00:20:30.911 Device Information : IOPS MiB/s Average min max 00:20:30.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7609.70 29.73 8419.88 1946.20 53752.24 00:20:30.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6325.10 24.71 10119.04 1749.34 52460.65 00:20:30.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5958.50 23.28 10744.03 2305.44 54872.56 00:20:30.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5592.00 21.84 11449.12 1831.38 57516.96 00:20:30.911 ======================================================== 00:20:30.911 Total : 25485.29 99.55 10049.66 1749.34 57516.96 00:20:30.911 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.911 rmmod nvme_tcp 00:20:30.911 rmmod nvme_fabrics 00:20:30.911 rmmod nvme_keyring 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 920678 ']' 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 920678 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 920678 ']' 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 920678 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 920678 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 920678' 00:20:30.911 killing process with pid 920678 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 920678 00:20:30.911 [2024-05-15 00:35:56.859757] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:30.911 00:35:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 920678 00:20:31.171 00:35:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:31.171 00:35:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:31.171 00:35:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:31.171 00:35:57 
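Each run is gated on how the IO qpairs landed on the target's poll groups, via the nvmf_get_stats/jq checks shown just before the latency tables: with placement-id 0 all four poll groups must own exactly one IO qpair, while the busy-poll run with placement-id 1 expects at least two poll groups to stay idle, the connections having been concentrated onto the two queues the mqprio/flower setup reserves for port 4420. A sketch of the first check, assuming scripts/rpc.py and jq are available:

    # Count poll groups that currently own exactly one IO qpair; the test
    # requires this to be 4 for the placement-id 0 run.
    busy=$(./scripts/rpc.py nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
           | wc -l)
    [[ $busy -eq 4 ]] || echo "unexpected qpair distribution: $busy busy poll groups"

The second run flips the filter to select(.current_io_qpairs == 0) and fails only if fewer than two groups are idle.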
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.171 00:35:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.171 00:35:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.171 00:35:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.171 00:35:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.458 00:36:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.458 00:36:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:34.458 00:20:34.458 real 0m44.887s 00:20:34.458 user 2m35.750s 00:20:34.458 sys 0m12.353s 00:20:34.458 00:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:34.458 00:36:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.458 ************************************ 00:20:34.458 END TEST nvmf_perf_adq 00:20:34.458 ************************************ 00:20:34.458 00:36:00 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:34.458 00:36:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:20:34.458 00:36:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:34.458 00:36:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.458 ************************************ 00:20:34.458 START TEST nvmf_shutdown 00:20:34.458 ************************************ 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:34.458 * Looking for test storage... 
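The perf_adq suite ends here and the harness moves on to the shutdown tests via run_test. The same suite can be launched directly from an SPDK checkout; a sketch (the checkout path is a placeholder, and root privileges plus NICs prepared per the autorun config are assumed):

    cd /path/to/spdk   # placeholder for the workspace checkout root
    sudo ./test/nvmf/target/shutdown.sh --transport=tcp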
00:20:34.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:34.458 ************************************ 00:20:34.458 START TEST nvmf_shutdown_tc1 00:20:34.458 ************************************ 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc1 00:20:34.458 00:36:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.458 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.459 00:36:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.991 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:36.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:36.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.992 00:36:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:36.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:36.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:20:36.992 00:20:36.992 --- 10.0.0.2 ping statistics --- 00:20:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.992 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:20:36.992 00:20:36.992 --- 10.0.0.1 ping statistics --- 00:20:36.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.992 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=924537 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 924537 00:20:36.992 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 924537 ']' 00:20:36.993 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.993 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:36.993 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.993 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:36.993 00:36:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.993 [2024-05-15 00:36:02.956715] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
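The nvmf/common.sh commands traced above are the TCP test-bed setup: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side, the other port (cvl_0_1) stays in the root namespace as the initiator side, both directions are ping-checked, and nvmf_tgt is then launched inside that namespace so it can listen on the target address. A condensed, stand-alone sketch of the same sequence, assuming the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing seen in this run (paths abbreviated, run as root):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target inside the namespace (this is what nvmfappstart ends up running)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &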
00:20:36.993 [2024-05-15 00:36:02.956783] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.993 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.993 [2024-05-15 00:36:03.035651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.993 [2024-05-15 00:36:03.153807] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.993 [2024-05-15 00:36:03.153865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.993 [2024-05-15 00:36:03.153890] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.993 [2024-05-15 00:36:03.153904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.993 [2024-05-15 00:36:03.153916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.993 [2024-05-15 00:36:03.154020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.251 [2024-05-15 00:36:03.154418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.251 [2024-05-15 00:36:03.154529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.251 [2024-05-15 00:36:03.154532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 [2024-05-15 00:36:03.307592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.251 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.251 Malloc1 00:20:37.251 [2024-05-15 00:36:03.383378] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:37.251 [2024-05-15 00:36:03.383689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.251 Malloc2 00:20:37.508 Malloc3 00:20:37.508 Malloc4 00:20:37.508 Malloc5 00:20:37.508 Malloc6 00:20:37.508 Malloc7 00:20:37.766 Malloc8 00:20:37.766 Malloc9 00:20:37.766 Malloc10 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.766 00:36:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=924715 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 924715 /var/tmp/bdevperf.sock 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 924715 ']' 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.766 { 00:20:37.766 "params": { 00:20:37.766 "name": "Nvme$subsystem", 00:20:37.766 "trtype": "$TEST_TRANSPORT", 00:20:37.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.766 "adrfam": "ipv4", 00:20:37.766 "trsvcid": "$NVMF_PORT", 00:20:37.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.766 "hdgst": ${hdgst:-false}, 00:20:37.766 "ddgst": ${ddgst:-false} 00:20:37.766 }, 00:20:37.766 "method": "bdev_nvme_attach_controller" 00:20:37.766 } 00:20:37.766 EOF 00:20:37.766 )") 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.766 { 00:20:37.766 "params": { 00:20:37.766 "name": "Nvme$subsystem", 00:20:37.766 "trtype": "$TEST_TRANSPORT", 00:20:37.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.766 "adrfam": "ipv4", 00:20:37.766 "trsvcid": "$NVMF_PORT", 00:20:37.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.766 "hdgst": ${hdgst:-false}, 00:20:37.766 "ddgst": ${ddgst:-false} 00:20:37.766 }, 00:20:37.766 "method": "bdev_nvme_attach_controller" 00:20:37.766 } 00:20:37.766 EOF 00:20:37.766 )") 00:20:37.766 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.767 { 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme$subsystem", 00:20:37.767 "trtype": "$TEST_TRANSPORT", 00:20:37.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "$NVMF_PORT", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.767 "hdgst": ${hdgst:-false}, 00:20:37.767 "ddgst": ${ddgst:-false} 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 } 00:20:37.767 EOF 00:20:37.767 )") 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
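The heredoc loop traced above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem and then joining and pretty-printing the result with jq (the full output follows below). A trimmed sketch of that pattern, with variable names taken from the trace; the real helper in test/nvmf/common.sh presumably embeds these entries in the full {"subsystems": [...]} document that bdev_svc and bdevperf expect, which is not reproduced here:

  build_controller_entries() {
      # One bdev_nvme_attach_controller config entry per subsystem id; the
      # variables ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT) match the trace.
      local subsystem entries=()
      for subsystem in "${@:-1}"; do
          entries+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' \
              "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$subsystem" "$subsystem")")
      done
      local IFS=,
      # wrapped in [] here only so jq can validate the comma-joined entries;
      # the real helper supplies its own outer JSON document instead
      printf '[%s]\n' "${entries[*]}" | jq .
  }

  # e.g.: TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 build_controller_entries {1..10}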
00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:37.767 00:36:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme1", 00:20:37.767 "trtype": "tcp", 00:20:37.767 "traddr": "10.0.0.2", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "4420", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.767 "hdgst": false, 00:20:37.767 "ddgst": false 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 },{ 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme2", 00:20:37.767 "trtype": "tcp", 00:20:37.767 "traddr": "10.0.0.2", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "4420", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.767 "hdgst": false, 00:20:37.767 "ddgst": false 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 },{ 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme3", 00:20:37.767 "trtype": "tcp", 00:20:37.767 "traddr": "10.0.0.2", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "4420", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:37.767 "hdgst": false, 00:20:37.767 "ddgst": false 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 },{ 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme4", 00:20:37.767 "trtype": "tcp", 00:20:37.767 "traddr": "10.0.0.2", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "4420", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:37.767 "hdgst": false, 00:20:37.767 "ddgst": false 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 },{ 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme5", 00:20:37.767 "trtype": "tcp", 00:20:37.767 "traddr": "10.0.0.2", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "4420", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:37.767 "hdgst": false, 00:20:37.767 "ddgst": false 00:20:37.767 }, 00:20:37.767 "method": "bdev_nvme_attach_controller" 00:20:37.767 },{ 00:20:37.767 "params": { 00:20:37.767 "name": "Nvme6", 00:20:37.767 "trtype": "tcp", 00:20:37.767 "traddr": "10.0.0.2", 00:20:37.767 "adrfam": "ipv4", 00:20:37.767 "trsvcid": "4420", 00:20:37.767 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:37.767 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:37.767 "hdgst": false, 00:20:37.768 "ddgst": false 00:20:37.768 }, 00:20:37.768 "method": "bdev_nvme_attach_controller" 00:20:37.768 },{ 00:20:37.768 "params": { 00:20:37.768 "name": "Nvme7", 00:20:37.768 "trtype": "tcp", 00:20:37.768 "traddr": "10.0.0.2", 00:20:37.768 "adrfam": "ipv4", 00:20:37.768 "trsvcid": "4420", 00:20:37.768 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:37.768 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:37.768 "hdgst": false, 00:20:37.768 "ddgst": false 00:20:37.768 }, 00:20:37.768 "method": "bdev_nvme_attach_controller" 00:20:37.768 },{ 00:20:37.768 "params": { 00:20:37.768 "name": "Nvme8", 00:20:37.768 "trtype": "tcp", 00:20:37.768 "traddr": "10.0.0.2", 00:20:37.768 "adrfam": "ipv4", 00:20:37.768 "trsvcid": "4420", 00:20:37.768 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:37.768 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:37.768 "hdgst": false, 
00:20:37.768 "ddgst": false 00:20:37.768 }, 00:20:37.768 "method": "bdev_nvme_attach_controller" 00:20:37.768 },{ 00:20:37.768 "params": { 00:20:37.768 "name": "Nvme9", 00:20:37.768 "trtype": "tcp", 00:20:37.768 "traddr": "10.0.0.2", 00:20:37.768 "adrfam": "ipv4", 00:20:37.768 "trsvcid": "4420", 00:20:37.768 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:37.768 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:37.768 "hdgst": false, 00:20:37.768 "ddgst": false 00:20:37.768 }, 00:20:37.768 "method": "bdev_nvme_attach_controller" 00:20:37.768 },{ 00:20:37.768 "params": { 00:20:37.768 "name": "Nvme10", 00:20:37.768 "trtype": "tcp", 00:20:37.768 "traddr": "10.0.0.2", 00:20:37.768 "adrfam": "ipv4", 00:20:37.768 "trsvcid": "4420", 00:20:37.768 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:37.768 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:37.768 "hdgst": false, 00:20:37.768 "ddgst": false 00:20:37.768 }, 00:20:37.768 "method": "bdev_nvme_attach_controller" 00:20:37.768 }' 00:20:37.768 [2024-05-15 00:36:03.876050] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:20:37.768 [2024-05-15 00:36:03.876132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:37.768 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.026 [2024-05-15 00:36:03.952657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.026 [2024-05-15 00:36:04.062546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 924715 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:39.398 00:36:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:40.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 924715 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 924537 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- 
# local subsystem config 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.333 "hdgst": ${hdgst:-false}, 00:20:40.333 "ddgst": ${ddgst:-false} 00:20:40.333 }, 00:20:40.333 "method": "bdev_nvme_attach_controller" 00:20:40.333 } 00:20:40.333 EOF 00:20:40.333 )") 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.333 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.333 { 00:20:40.333 "params": { 00:20:40.333 "name": "Nvme$subsystem", 00:20:40.333 "trtype": "$TEST_TRANSPORT", 00:20:40.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.333 "adrfam": "ipv4", 00:20:40.333 "trsvcid": "$NVMF_PORT", 00:20:40.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.334 "hdgst": ${hdgst:-false}, 00:20:40.334 "ddgst": ${ddgst:-false} 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 } 00:20:40.334 EOF 00:20:40.334 )") 00:20:40.334 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:40.334 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
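This second pass over gen_nvmf_target_json builds the configuration for bdevperf itself: shutdown_tc1 has already started bdev_svc against the same ten controllers, killed it with SIGKILL, and confirmed with kill -0 that the target (pid 924537) survived; the next step is a short verify run against all ten NVMe/TCP controllers. The JSON never touches disk, it is generated on the fly and handed over through a process substitution, roughly as in this sketch (path abbreviated, flags copied from the trace):

  # -q 64      queue depth per bdev
  # -o 65536   64 KiB I/O size
  # -w verify  verify workload (written data is read back and checked)
  # -t 1       run time in seconds
  ./build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) \
      -q 64 -o 65536 -w verify -t 1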
00:20:40.334 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:40.334 00:36:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme1", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme2", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme3", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme4", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme5", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme6", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme7", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme8", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:40.334 "hdgst": false, 
00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme9", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 },{ 00:20:40.334 "params": { 00:20:40.334 "name": "Nvme10", 00:20:40.334 "trtype": "tcp", 00:20:40.334 "traddr": "10.0.0.2", 00:20:40.334 "adrfam": "ipv4", 00:20:40.334 "trsvcid": "4420", 00:20:40.334 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:40.334 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:40.334 "hdgst": false, 00:20:40.334 "ddgst": false 00:20:40.334 }, 00:20:40.334 "method": "bdev_nvme_attach_controller" 00:20:40.334 }' 00:20:40.334 [2024-05-15 00:36:06.398478] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:20:40.334 [2024-05-15 00:36:06.398560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925007 ] 00:20:40.334 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.334 [2024-05-15 00:36:06.476088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.592 [2024-05-15 00:36:06.589637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.966 Running I/O for 1 seconds... 00:20:43.375 00:20:43.375 Latency(us) 00:20:43.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.375 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme1n1 : 1.19 215.98 13.50 0.00 0.00 293052.87 39612.87 236123.78 00:20:43.375 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme2n1 : 1.19 269.77 16.86 0.00 0.00 230893.64 19709.35 250104.79 00:20:43.375 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme3n1 : 1.20 266.94 16.68 0.00 0.00 230100.99 21068.61 250104.79 00:20:43.375 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme4n1 : 1.24 206.16 12.88 0.00 0.00 284397.04 23204.60 284280.60 00:20:43.375 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme5n1 : 1.20 212.48 13.28 0.00 0.00 279910.78 23301.69 299815.06 00:20:43.375 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme6n1 : 1.13 226.77 14.17 0.00 0.00 256496.26 18350.08 250104.79 00:20:43.375 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme7n1 : 1.21 264.66 16.54 0.00 0.00 217551.61 19126.80 281173.71 00:20:43.375 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 
00:20:43.375 Nvme8n1 : 1.20 214.15 13.38 0.00 0.00 264149.52 26991.12 250104.79 00:20:43.375 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme9n1 : 1.22 210.19 13.14 0.00 0.00 265305.88 23981.32 295154.73 00:20:43.375 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.375 Verification LBA range: start 0x0 length 0x400 00:20:43.375 Nvme10n1 : 1.17 225.60 14.10 0.00 0.00 240390.10 2645.71 253211.69 00:20:43.375 =================================================================================================================== 00:20:43.375 Total : 2312.68 144.54 0.00 0.00 254084.08 2645.71 299815.06 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.375 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.375 rmmod nvme_tcp 00:20:43.375 rmmod nvme_fabrics 00:20:43.633 rmmod nvme_keyring 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 924537 ']' 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 924537 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' -z 924537 ']' 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # kill -0 924537 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # uname 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 924537 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:43.633 00:36:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 924537' 00:20:43.633 killing process with pid 924537 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # kill 924537 00:20:43.633 [2024-05-15 00:36:09.581283] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:43.633 00:36:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # wait 924537 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.201 00:36:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:46.105 00:20:46.105 real 0m11.842s 00:20:46.105 user 0m31.826s 00:20:46.105 sys 0m3.513s 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 ************************************ 00:20:46.105 END TEST nvmf_shutdown_tc1 00:20:46.105 ************************************ 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 ************************************ 00:20:46.105 START TEST nvmf_shutdown_tc2 00:20:46.105 ************************************ 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc2 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.105 00:36:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:46.105 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:46.106 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:46.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:46.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:46.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:46.106 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:46.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:20:46.365 00:20:46.365 --- 10.0.0.2 ping statistics --- 00:20:46.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.365 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:20:46.365 00:20:46.365 --- 10.0.0.1 ping statistics --- 00:20:46.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.365 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.365 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=926371 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 926371 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 926371 ']' 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:46.366 00:36:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.366 [2024-05-15 00:36:12.463324] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:20:46.366 [2024-05-15 00:36:12.463419] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.366 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.624 [2024-05-15 00:36:12.547563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.624 [2024-05-15 00:36:12.664574] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.624 [2024-05-15 00:36:12.664633] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.624 [2024-05-15 00:36:12.664649] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.624 [2024-05-15 00:36:12.664663] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.624 [2024-05-15 00:36:12.664674] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.624 [2024-05-15 00:36:12.664757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.624 [2024-05-15 00:36:12.664878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.624 [2024-05-15 00:36:12.665014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.624 [2024-05-15 00:36:12.665019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.558 [2024-05-15 00:36:13.428802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.558 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.558 Malloc1 00:20:47.558 [2024-05-15 00:36:13.512358] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:47.558 [2024-05-15 00:36:13.512649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.558 Malloc2 00:20:47.558 Malloc3 00:20:47.558 Malloc4 00:20:47.558 Malloc5 00:20:47.817 Malloc6 00:20:47.817 Malloc7 00:20:47.817 Malloc8 00:20:47.817 Malloc9 00:20:47.817 Malloc10 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.817 00:36:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=926594 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 926594 /var/tmp/bdevperf.sock 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 926594 ']' 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:47.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.817 { 00:20:47.817 "params": { 00:20:47.817 "name": "Nvme$subsystem", 00:20:47.817 "trtype": "$TEST_TRANSPORT", 00:20:47.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.817 "adrfam": "ipv4", 00:20:47.817 "trsvcid": "$NVMF_PORT", 00:20:47.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.817 "hdgst": ${hdgst:-false}, 00:20:47.817 "ddgst": ${ddgst:-false} 00:20:47.817 }, 00:20:47.817 "method": "bdev_nvme_attach_controller" 00:20:47.817 } 00:20:47.817 EOF 00:20:47.817 )") 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.817 { 00:20:47.817 "params": { 00:20:47.817 "name": "Nvme$subsystem", 00:20:47.817 "trtype": "$TEST_TRANSPORT", 00:20:47.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.817 "adrfam": "ipv4", 00:20:47.817 "trsvcid": "$NVMF_PORT", 00:20:47.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.817 "hdgst": ${hdgst:-false}, 00:20:47.817 "ddgst": ${ddgst:-false} 00:20:47.817 }, 00:20:47.817 "method": "bdev_nvme_attach_controller" 00:20:47.817 } 00:20:47.817 EOF 00:20:47.817 )") 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.817 { 00:20:47.817 "params": { 00:20:47.817 "name": "Nvme$subsystem", 00:20:47.817 "trtype": "$TEST_TRANSPORT", 00:20:47.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.817 "adrfam": "ipv4", 00:20:47.817 "trsvcid": "$NVMF_PORT", 00:20:47.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.817 "hdgst": ${hdgst:-false}, 00:20:47.817 "ddgst": ${ddgst:-false} 00:20:47.817 }, 00:20:47.817 "method": "bdev_nvme_attach_controller" 00:20:47.817 } 00:20:47.817 EOF 00:20:47.817 )") 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.817 { 00:20:47.817 "params": { 00:20:47.817 "name": "Nvme$subsystem", 00:20:47.817 "trtype": "$TEST_TRANSPORT", 00:20:47.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.817 "adrfam": "ipv4", 00:20:47.817 "trsvcid": "$NVMF_PORT", 00:20:47.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.817 "hdgst": ${hdgst:-false}, 00:20:47.817 "ddgst": ${ddgst:-false} 00:20:47.817 }, 00:20:47.817 "method": "bdev_nvme_attach_controller" 00:20:47.817 } 00:20:47.817 EOF 00:20:47.817 )") 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.817 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.817 { 00:20:47.818 "params": { 00:20:47.818 "name": "Nvme$subsystem", 00:20:47.818 "trtype": "$TEST_TRANSPORT", 00:20:47.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.818 "adrfam": "ipv4", 00:20:47.818 "trsvcid": "$NVMF_PORT", 00:20:47.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.818 "hdgst": ${hdgst:-false}, 00:20:47.818 "ddgst": ${ddgst:-false} 00:20:47.818 }, 00:20:47.818 "method": "bdev_nvme_attach_controller" 00:20:47.818 } 00:20:47.818 EOF 00:20:47.818 )") 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.076 { 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme$subsystem", 00:20:48.076 "trtype": "$TEST_TRANSPORT", 00:20:48.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "$NVMF_PORT", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.076 "hdgst": ${hdgst:-false}, 00:20:48.076 "ddgst": ${ddgst:-false} 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 } 00:20:48.076 EOF 00:20:48.076 )") 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.076 { 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme$subsystem", 00:20:48.076 "trtype": "$TEST_TRANSPORT", 00:20:48.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "$NVMF_PORT", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.076 "hdgst": ${hdgst:-false}, 00:20:48.076 "ddgst": ${ddgst:-false} 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 } 00:20:48.076 EOF 00:20:48.076 )") 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.076 { 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme$subsystem", 00:20:48.076 "trtype": "$TEST_TRANSPORT", 00:20:48.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "$NVMF_PORT", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.076 "hdgst": ${hdgst:-false}, 00:20:48.076 "ddgst": ${ddgst:-false} 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 } 00:20:48.076 EOF 00:20:48.076 )") 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.076 { 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme$subsystem", 00:20:48.076 "trtype": "$TEST_TRANSPORT", 00:20:48.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "$NVMF_PORT", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.076 "hdgst": ${hdgst:-false}, 00:20:48.076 "ddgst": ${ddgst:-false} 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 } 00:20:48.076 EOF 00:20:48.076 )") 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.076 { 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme$subsystem", 00:20:48.076 "trtype": "$TEST_TRANSPORT", 00:20:48.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "$NVMF_PORT", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.076 "hdgst": ${hdgst:-false}, 00:20:48.076 "ddgst": ${ddgst:-false} 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 } 00:20:48.076 EOF 00:20:48.076 )") 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:48.076 00:36:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme1", 00:20:48.076 "trtype": "tcp", 00:20:48.076 "traddr": "10.0.0.2", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "4420", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.076 "hdgst": false, 00:20:48.076 "ddgst": false 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 },{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme2", 00:20:48.076 "trtype": "tcp", 00:20:48.076 "traddr": "10.0.0.2", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "4420", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:48.076 "hdgst": false, 00:20:48.076 "ddgst": false 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 },{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme3", 00:20:48.076 "trtype": "tcp", 00:20:48.076 "traddr": "10.0.0.2", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "4420", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:48.076 "hdgst": false, 00:20:48.076 "ddgst": false 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 },{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme4", 00:20:48.076 "trtype": "tcp", 00:20:48.076 "traddr": "10.0.0.2", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "4420", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:48.076 "hdgst": false, 00:20:48.076 "ddgst": false 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 },{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme5", 00:20:48.076 "trtype": "tcp", 00:20:48.076 "traddr": "10.0.0.2", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "4420", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:48.076 "hdgst": false, 00:20:48.076 "ddgst": false 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 },{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme6", 00:20:48.076 "trtype": "tcp", 00:20:48.076 "traddr": "10.0.0.2", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "4420", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:48.076 "hdgst": false, 00:20:48.076 "ddgst": false 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 },{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme7", 00:20:48.076 "trtype": "tcp", 00:20:48.076 "traddr": "10.0.0.2", 00:20:48.076 "adrfam": "ipv4", 00:20:48.076 "trsvcid": "4420", 00:20:48.076 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:48.076 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:48.076 "hdgst": false, 00:20:48.076 "ddgst": false 00:20:48.076 }, 00:20:48.076 "method": "bdev_nvme_attach_controller" 00:20:48.076 },{ 00:20:48.076 "params": { 00:20:48.076 "name": "Nvme8", 00:20:48.077 "trtype": "tcp", 00:20:48.077 "traddr": "10.0.0.2", 00:20:48.077 "adrfam": "ipv4", 00:20:48.077 "trsvcid": "4420", 00:20:48.077 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:48.077 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:48.077 "hdgst": false, 
00:20:48.077 "ddgst": false 00:20:48.077 }, 00:20:48.077 "method": "bdev_nvme_attach_controller" 00:20:48.077 },{ 00:20:48.077 "params": { 00:20:48.077 "name": "Nvme9", 00:20:48.077 "trtype": "tcp", 00:20:48.077 "traddr": "10.0.0.2", 00:20:48.077 "adrfam": "ipv4", 00:20:48.077 "trsvcid": "4420", 00:20:48.077 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:48.077 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:48.077 "hdgst": false, 00:20:48.077 "ddgst": false 00:20:48.077 }, 00:20:48.077 "method": "bdev_nvme_attach_controller" 00:20:48.077 },{ 00:20:48.077 "params": { 00:20:48.077 "name": "Nvme10", 00:20:48.077 "trtype": "tcp", 00:20:48.077 "traddr": "10.0.0.2", 00:20:48.077 "adrfam": "ipv4", 00:20:48.077 "trsvcid": "4420", 00:20:48.077 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:48.077 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:48.077 "hdgst": false, 00:20:48.077 "ddgst": false 00:20:48.077 }, 00:20:48.077 "method": "bdev_nvme_attach_controller" 00:20:48.077 }' 00:20:48.077 [2024-05-15 00:36:14.008252] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:20:48.077 [2024-05-15 00:36:14.008358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926594 ] 00:20:48.077 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.077 [2024-05-15 00:36:14.083115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.077 [2024-05-15 00:36:14.192788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.977 Running I/O for 10 seconds... 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:49.977 00:36:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:50.235 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 926594 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 926594 ']' 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 926594 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 
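[Editor's note, not part of the captured console output] The read_io_count=3 / 67 / 131 lines above come from target/shutdown.sh's waitforio helper, which polls bdevperf over its RPC socket until the bdev under test has completed at least 100 reads and only then lets the test kill the perf job. A hedged re-creation of that loop follows; scripts/rpc.py stands in for the rpc_cmd wrapper used in the trace, and $perfpid in the usage line is the bdevperf pid (926594 in this run).

# Sketch of the waitforio polling loop exercised in the trace above.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # ask bdevperf how many reads it has completed on this bdev so far
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# usage matching the socket and bdev names in the trace:
# waitforio /var/tmp/bdevperf.sock Nvme1n1 && kill "$perfpid"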
00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 926594 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 926594' 00:20:50.494 killing process with pid 926594 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 926594 00:20:50.494 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 926594 00:20:50.494 Received shutdown signal, test time was about 0.919993 seconds 00:20:50.494 00:20:50.494 Latency(us) 00:20:50.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.494 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme1n1 : 0.91 211.73 13.23 0.00 0.00 297960.74 23495.87 267192.70 00:20:50.494 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme2n1 : 0.91 210.32 13.14 0.00 0.00 294399.24 22039.51 340204.66 00:20:50.494 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme3n1 : 0.87 219.95 13.75 0.00 0.00 274899.25 23495.87 271853.04 00:20:50.494 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme4n1 : 0.89 216.60 13.54 0.00 0.00 273086.89 22622.06 278066.82 00:20:50.494 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme5n1 : 0.92 208.88 13.06 0.00 0.00 278057.21 23301.69 354185.67 00:20:50.494 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme6n1 : 0.88 146.05 9.13 0.00 0.00 386795.90 39807.05 326223.64 00:20:50.494 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme7n1 : 0.90 213.80 13.36 0.00 0.00 258840.15 24660.95 301368.51 00:20:50.494 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme8n1 : 0.90 212.65 13.29 0.00 0.00 254453.38 21554.06 306028.85 00:20:50.494 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme9n1 : 0.87 147.87 9.24 0.00 0.00 354179.22 22524.97 347971.89 00:20:50.494 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.494 Verification LBA range: start 0x0 length 0x400 00:20:50.494 Nvme10n1 : 0.89 143.12 8.94 0.00 0.00 359654.59 23592.96 385254.59 00:20:50.494 =================================================================================================================== 00:20:50.494 Total : 1930.96 120.68 0.00 
0.00 296161.11 21554.06 385254.59 00:20:51.061 00:36:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 926371 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.994 00:36:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.994 rmmod nvme_tcp 00:20:51.994 rmmod nvme_fabrics 00:20:51.994 rmmod nvme_keyring 00:20:51.994 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.994 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:51.994 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 926371 ']' 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 926371 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 926371 ']' 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 926371 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 926371 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 926371' 00:20:51.995 killing process with pid 926371 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 926371 00:20:51.995 [2024-05-15 00:36:18.037626] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 
hit 1 times 00:20:51.995 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 926371 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.561 00:36:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.468 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.468 00:20:54.468 real 0m8.387s 00:20:54.468 user 0m25.656s 00:20:54.468 sys 0m1.615s 00:20:54.468 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:54.468 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.468 ************************************ 00:20:54.468 END TEST nvmf_shutdown_tc2 00:20:54.468 ************************************ 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:54.727 ************************************ 00:20:54.727 START TEST nvmf_shutdown_tc3 00:20:54.727 ************************************ 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc3 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.727 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.728 00:36:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:54.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:54.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:20:54.728 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:54.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:54.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:54.728 00:20:54.728 --- 10.0.0.2 ping statistics --- 00:20:54.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.728 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:20:54.728 00:20:54.728 --- 10.0.0.1 ping statistics --- 00:20:54.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.728 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:54.728 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=927513 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@482 -- # waitforlisten 927513 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 927513 ']' 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:54.729 00:36:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:54.987 [2024-05-15 00:36:20.908516] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:20:54.987 [2024-05-15 00:36:20.908603] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.987 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.987 [2024-05-15 00:36:20.983455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.987 [2024-05-15 00:36:21.095571] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.987 [2024-05-15 00:36:21.095629] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.987 [2024-05-15 00:36:21.095645] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.987 [2024-05-15 00:36:21.095658] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.987 [2024-05-15 00:36:21.095670] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
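For reference, the nvmf_tcp_init trace above amounts to the following loopback topology: the second E810 port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, while the first port (cvl_0_0, 10.0.0.2) is moved into a private namespace that the target runs in. Below is a condensed, hand-written restatement of those steps; the interface names, addresses and port are the ones from this run, and the helper name setup_tcp_loopback is only illustrative, not part of nvmf/common.sh.

  # Sketch only; restates the ip/iptables/ping commands traced above.
  setup_tcp_loopback() {
      local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
      ip -4 addr flush "$target_if"
      ip -4 addr flush "$initiator_if"
      ip netns add "$ns"                         # private namespace for the target
      ip link set "$target_if" netns "$ns"       # move the target port into it
      ip addr add 10.0.0.1/24 dev "$initiator_if"
      ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
      ip link set "$initiator_if" up
      ip netns exec "$ns" ip link set "$target_if" up
      ip netns exec "$ns" ip link set lo up
      # open the default NVMe/TCP port on the initiator-facing interface
      iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
      ping -c 1 10.0.0.2                         # initiator -> target
      ip netns exec "$ns" ping -c 1 10.0.0.1     # target -> initiator
  }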
00:20:54.987 [2024-05-15 00:36:21.095758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.987 [2024-05-15 00:36:21.095800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.987 [2024-05-15 00:36:21.095874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:54.987 [2024-05-15 00:36:21.095877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.246 [2024-05-15 00:36:21.252827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:55.246 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.247 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.247 Malloc1 00:20:55.247 [2024-05-15 00:36:21.342383] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:55.247 [2024-05-15 00:36:21.342703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.247 Malloc2 00:20:55.505 Malloc3 00:20:55.505 Malloc4 00:20:55.505 Malloc5 00:20:55.505 Malloc6 00:20:55.505 Malloc7 00:20:55.764 Malloc8 00:20:55.764 Malloc9 00:20:55.764 Malloc10 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=927691 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 927691 /var/tmp/bdevperf.sock 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 927691 ']' 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:55.764 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 
00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": 
"Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.765 { 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme$subsystem", 00:20:55.765 "trtype": "$TEST_TRANSPORT", 00:20:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "$NVMF_PORT", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.765 "hdgst": ${hdgst:-false}, 00:20:55.765 "ddgst": ${ddgst:-false} 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 } 00:20:55.765 EOF 00:20:55.765 )") 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:55.765 00:36:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme1", 00:20:55.765 "trtype": "tcp", 00:20:55.765 "traddr": "10.0.0.2", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "4420", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.765 "hdgst": false, 00:20:55.765 "ddgst": false 00:20:55.765 }, 00:20:55.765 "method": "bdev_nvme_attach_controller" 00:20:55.765 },{ 00:20:55.765 "params": { 00:20:55.765 "name": "Nvme2", 00:20:55.765 "trtype": "tcp", 00:20:55.765 "traddr": "10.0.0.2", 00:20:55.765 "adrfam": "ipv4", 00:20:55.765 "trsvcid": "4420", 00:20:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:55.765 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme3", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme4", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme5", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme6", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme7", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme8", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:55.766 "hdgst": false, 
00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme9", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 },{ 00:20:55.766 "params": { 00:20:55.766 "name": "Nvme10", 00:20:55.766 "trtype": "tcp", 00:20:55.766 "traddr": "10.0.0.2", 00:20:55.766 "adrfam": "ipv4", 00:20:55.766 "trsvcid": "4420", 00:20:55.766 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:55.766 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:55.766 "hdgst": false, 00:20:55.766 "ddgst": false 00:20:55.766 }, 00:20:55.766 "method": "bdev_nvme_attach_controller" 00:20:55.766 }' 00:20:55.766 [2024-05-15 00:36:21.863308] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:20:55.766 [2024-05-15 00:36:21.863384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927691 ] 00:20:55.766 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.032 [2024-05-15 00:36:21.936963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.032 [2024-05-15 00:36:22.046449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.995 Running I/O for 10 seconds... 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:57.995 00:36:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.275 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 927513 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' -z 927513 ']' 00:20:58.552 00:36:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # kill -0 927513 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # uname 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 927513 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 927513' 00:20:58.552 killing process with pid 927513 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # kill 927513 00:20:58.552 [2024-05-15 00:36:24.512954] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:58.552 00:36:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # wait 927513 00:20:58.552 [2024-05-15 00:36:24.520268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520474] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the 
state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.520999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.521012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.521024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.521036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.521048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21deef0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.522093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e18b0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.522126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e18b0 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.523351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.523375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.523388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.523400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.552 [2024-05-15 00:36:24.523413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 00:36:24.523558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set 00:20:58.553 [2024-05-15 
00:36:24.523570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set
00:20:58.553 [2024-05-15 00:36:24.523582 - 00:36:24.524134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df390 is same with the state(5) to be set (same message repeated for every timestamp in this interval)
00:20:58.553 [2024-05-15 00:36:24.525520 - 00:36:24.526255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df830 is same with the state(5) to be set (same message repeated for every timestamp in this interval)
00:20:58.554 [2024-05-15 00:36:24.527714 - 00:36:24.528591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e0170 is same with the state(5) to be set (same message repeated for every timestamp in this interval)
00:20:58.555 [2024-05-15 00:36:24.529641 - 00:36:24.530408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e0630 is same with the state(5) to be set (same message repeated for every timestamp in this interval)
00:20:58.555 [2024-05-15 00:36:24.531764 - 00:36:24.532549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e0ad0 is same with the state(5) to be set (same message repeated for every timestamp in this interval)
00:20:58.556 [2024-05-15 00:36:24.533582 - 00:36:24.534384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e0f70 is same with the state(5) to be set (same message repeated for every timestamp in this interval)
00:20:58.557 [2024-05-15 00:36:24.535139 - 00:36:24.535919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e1410 is same with the state(5) to be set (same message repeated for every timestamp in this interval)
00:20:58.558 [2024-05-15 00:36:24.539223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.558 [2024-05-15 00:36:24.539756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.558 [2024-05-15 00:36:24.539771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.539785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.539801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.539814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.539830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.539844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.539860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.539874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.539889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.539903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.539922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.539945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.539963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.539988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.558 [2024-05-15 00:36:24.540361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.558 [2024-05-15 00:36:24.540374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.540975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.540990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.559 [2024-05-15 00:36:24.541213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:58.559 [2024-05-15 00:36:24.541809] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e82b60 was disconnected and freed. reset controller. 
00:20:58.559 [2024-05-15 00:36:24.541921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.541951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.541967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.541989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2e8b0 is same with the state(5) to be set 00:20:58.559 [2024-05-15 00:36:24.542107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbd0d0 is same with the state(5) to be set 00:20:58.559 [2024-05-15 00:36:24.542284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.559 [2024-05-15 00:36:24.542345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.559 [2024-05-15 00:36:24.542358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8f6d0 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.542442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbcdc0 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.542666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:58.560 [2024-05-15 00:36:24.542745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfa50 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.542824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.542924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.542945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2da40 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.543005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543105] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8fc00 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.543167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d747c0 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.543325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d82ec0 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.543484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 
[2024-05-15 00:36:24.543504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.560 [2024-05-15 00:36:24.543585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.543602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed10 is same with the state(5) to be set 00:20:58.560 [2024-05-15 00:36:24.544571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-05-15 00:36:24.544597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.544621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.560 [2024-05-15 00:36:24.544636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.560 [2024-05-15 00:36:24.544653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.544984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.544999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.561 [2024-05-15 00:36:24.545878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.561 [2024-05-15 00:36:24.545894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.545908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.545923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.545943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.545960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.545985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.546520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.546535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62440 is same with the state(5) to be set 00:20:58.562 [2024-05-15 00:36:24.546623] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d62440 was disconnected and freed. reset controller. 
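The completions dumped above all carry status (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion. That is why every outstanding WRITE on qpair 0x1d62440 is failed back when the queue pair is torn down for the controller reset. The short C sketch below only illustrates how such an SCT/SC pair maps onto the string printed in the log; it is not the SPDK nvme_qpair.c implementation, and the helper name is invented.

#include <stdint.h>
#include <stdio.h>

/* Illustrative decoder for the "(SCT/SC)" pair printed in the log.
 * SCT 0x0 = generic command status; within that type, SC 0x08 is
 * "Command Aborted due to SQ Deletion", i.e. the (00/08) reported
 * on every aborted WRITE above. Not SPDK code; names are made up. */
static const char *decode_nvme_status(uint8_t sct, uint8_t sc)
{
	if (sct == 0x0 && sc == 0x08) {
		return "ABORTED - SQ DELETION";
	}
	if (sct == 0x0 && sc == 0x00) {
		return "SUCCESS";
	}
	return "OTHER";
}

int main(void)
{
	printf("(00/08) -> %s\n", decode_nvme_status(0x00, 0x08));
	return 0;
}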
00:20:58.562 [2024-05-15 00:36:24.550301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:58.562 [2024-05-15 00:36:24.550349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:58.562 [2024-05-15 00:36:24.550379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8f6d0 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.550404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddfa50 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.552332] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:58.562 [2024-05-15 00:36:24.552400] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:58.562 [2024-05-15 00:36:24.552468] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:58.562 [2024-05-15 00:36:24.552531] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:58.562 [2024-05-15 00:36:24.552600] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:58.562 [2024-05-15 00:36:24.552833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.562 [2024-05-15 00:36:24.553021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.562 [2024-05-15 00:36:24.553054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddfa50 with addr=10.0.0.2, port=4420 00:20:58.562 [2024-05-15 00:36:24.553074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfa50 is same with the state(5) to be set 00:20:58.562 [2024-05-15 00:36:24.553243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.562 [2024-05-15 00:36:24.553414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.562 [2024-05-15 00:36:24.553439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8f6d0 with addr=10.0.0.2, port=4420 00:20:58.562 [2024-05-15 00:36:24.553455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8f6d0 is same with the state(5) to be set 00:20:58.562 [2024-05-15 00:36:24.553477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2e8b0 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbd0d0 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbcdc0 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2da40 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8fc00 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d747c0 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d82ec0 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1f3ed10 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.553873] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:58.562 [2024-05-15 00:36:24.554317] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:58.562 [2024-05-15 00:36:24.554465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddfa50 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.554494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8f6d0 (9): Bad file descriptor 00:20:58.562 [2024-05-15 00:36:24.554675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:58.562 [2024-05-15 00:36:24.554698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:58.562 [2024-05-15 00:36:24.554716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:58.562 [2024-05-15 00:36:24.554737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:58.562 [2024-05-15 00:36:24.554750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:58.562 [2024-05-15 00:36:24.554763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:58.562 [2024-05-15 00:36:24.554829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.554854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.554879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.554895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.554911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.554925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.554949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.562 [2024-05-15 00:36:24.554964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.562 [2024-05-15 00:36:24.554988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:58.563 [2024-05-15 00:36:24.555949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.555984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.555998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.563 [2024-05-15 00:36:24.556219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.563 [2024-05-15 00:36:24.556240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 
00:36:24.556256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.556777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.556792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e894b0 is same with the state(5) to be set 00:20:58.564 [2024-05-15 00:36:24.556884] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e894b0 was disconnected and freed. reset controller. 00:20:58.564 [2024-05-15 00:36:24.556951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.564 [2024-05-15 00:36:24.556979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
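By this point the reset path has hit repeated connect() failures with errno = 111, which on Linux is ECONNREFUSED: the target listener on 10.0.0.2:4420 is going away, so nvme_tcp_qpair_connect_sock cannot re-establish the connection and spdk_nvme_ctrlr_reconnect_poll_async reports the controller reinitialization as failed. The sketch below is a minimal, hypothetical C illustration of classifying ECONNREFUSED as a transient, retriable error; it is not the SPDK reconnect logic, and the helper name and retry policy are assumptions.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical helper: attempt a TCP connect to an NVMe/TCP listener
 * and classify ECONNREFUSED (errno 111 on Linux, as in the log) as
 * retriable. Address and port mirror the log (10.0.0.2, 4420). */
static int try_connect(const char *ip, uint16_t port)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		return -errno;
	}

	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(port),
	};
	inet_pton(AF_INET, ip, &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		int err = errno;
		close(fd);
		fprintf(stderr, "connect() failed, errno = %d (%s)\n",
			err, strerror(err));
		/* The listener is down or refusing; the caller can back
		 * off and retry, much as the reset path keeps doing. */
		return (err == ECONNREFUSED) ? -EAGAIN : -err;
	}
	return fd;
}

int main(void)
{
	int fd = try_connect("10.0.0.2", 4420);
	if (fd == -EAGAIN) {
		fprintf(stderr, "connection refused; retry later\n");
		return 1;
	}
	if (fd < 0) {
		return 1;
	}
	close(fd);
	return 0;
}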
00:20:58.564 [2024-05-15 00:36:24.558198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:58.564 [2024-05-15 00:36:24.558457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.564 [2024-05-15 00:36:24.558655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.564 [2024-05-15 00:36:24.558679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d747c0 with addr=10.0.0.2, port=4420 00:20:58.564 [2024-05-15 00:36:24.558695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d747c0 is same with the state(5) to be set 00:20:58.564 [2024-05-15 00:36:24.559045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d747c0 (9): Bad file descriptor 00:20:58.564 [2024-05-15 00:36:24.559119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:58.564 [2024-05-15 00:36:24.559139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:58.564 [2024-05-15 00:36:24.559153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:58.564 [2024-05-15 00:36:24.559219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.564 [2024-05-15 00:36:24.561662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:58.564 [2024-05-15 00:36:24.561742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:58.564 [2024-05-15 00:36:24.561934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.564 [2024-05-15 00:36:24.562100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.564 [2024-05-15 00:36:24.562126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8f6d0 with addr=10.0.0.2, port=4420 00:20:58.564 [2024-05-15 00:36:24.562142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8f6d0 is same with the state(5) to be set 00:20:58.564 [2024-05-15 00:36:24.562367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.564 [2024-05-15 00:36:24.562524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.564 [2024-05-15 00:36:24.562548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddfa50 with addr=10.0.0.2, port=4420 00:20:58.564 [2024-05-15 00:36:24.562563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfa50 is same with the state(5) to be set 00:20:58.564 [2024-05-15 00:36:24.562582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8f6d0 (9): Bad file descriptor 00:20:58.564 [2024-05-15 00:36:24.562639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddfa50 (9): Bad file descriptor 00:20:58.564 [2024-05-15 00:36:24.562660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:58.564 [2024-05-15 00:36:24.562673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:58.564 [2024-05-15 00:36:24.562685] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:58.564 [2024-05-15 00:36:24.562741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.564 [2024-05-15 00:36:24.562758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:58.564 [2024-05-15 00:36:24.562771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:58.564 [2024-05-15 00:36:24.562783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:58.564 [2024-05-15 00:36:24.562882] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.564 [2024-05-15 00:36:24.562993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.564 [2024-05-15 00:36:24.563241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.564 [2024-05-15 00:36:24.563256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.563972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.563986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:58.565 [2024-05-15 00:36:24.564150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.565 [2024-05-15 00:36:24.564266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.565 [2024-05-15 00:36:24.564279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 
00:36:24.564446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.564900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.564915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f06e60 is same with the state(5) to be set 00:20:58.566 [2024-05-15 00:36:24.566198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.566 [2024-05-15 00:36:24.566785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.566 [2024-05-15 00:36:24.566798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.566814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.566828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.566843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.566857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.566872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.566886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.566901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.566915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.566936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.566952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.566968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.566982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.566997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:58.567 [2024-05-15 00:36:24.567821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.567960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.567975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d56050 is same with the state(5) to be set 00:20:58.567 [2024-05-15 00:36:24.569206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.567 [2024-05-15 00:36:24.569230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.567 [2024-05-15 00:36:24.569251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 
00:36:24.569356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.569984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.569998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.568 [2024-05-15 00:36:24.570474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.568 [2024-05-15 00:36:24.570489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.570981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.570996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.571009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.571025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.571038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.571054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.571068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.571083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.571097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.571113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.571126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.571140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6e7a0 is same with the state(5) to be set 00:20:58.569 [2024-05-15 00:36:24.572372] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.569 [2024-05-15 00:36:24.572758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.569 [2024-05-15 00:36:24.572774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.572787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.572803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.572816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.572831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.572845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.572860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.572874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.572889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.572907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.572922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.572944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.572961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.572975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.572991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.570 [2024-05-15 00:36:24.573266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.570 [2024-05-15 00:36:24.573285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:58.570 [2024-05-15 00:36:24.573299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:58.570 [... READ command (nvme_qpair.c: 243:nvme_io_qpair_print_command) / ABORTED - SQ DELETION (00/08) completion (nvme_qpair.c: 474:spdk_nvme_print_completion) NOTICE pairs repeat for cid:31 through cid:63, lba:12160 through lba:16256 in steps of 128, len:128, 00:36:24.573314 - 00:36:24.574302 ...]
00:20:58.571 [2024-05-15 00:36:24.574317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6fca0 is same with the state(5) to be set
00:20:58.571 [2024-05-15 00:36:24.575571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.571 [... identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs repeat for cid:0 through cid:63, lba:8192 through lba:16256, 00:36:24.575595 - 00:36:24.577491 ...]
00:20:58.572 [2024-05-15 00:36:24.577505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x269a000 is same with the state(5) to be set
00:20:58.572 [2024-05-15 00:36:24.578741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.573 [... identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs repeat for cid:0 through cid:63, lba:8192 through lba:16256, 00:36:24.578763 - 00:36:24.580663 ...]
00:20:58.574 [2024-05-15 00:36:24.580681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2840a30 is same with the state(5) to be set
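In the completions above, (00/08) is the NVMe status pair SCT 0x0 (generic command status) / SC 0x08 (Command Aborted due to SQ Deletion): every READ still outstanding when the submission queue is torn down is failed back with that status, and the dump repeats for each affected TCP qpair (tqpair=0x1d6fca0, 0x269a000, 0x2840a30, ...). A quick way to sanity-check the pattern is to post-process the saved console output; this is a hypothetical helper, not something the job runs, and the file name build.log is assumed:

  # count how many aborted-by-SQ-deletion completions were printed
  grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l
  # list the qpairs that hit the recv-state ERROR, with occurrence counts
  grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c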
00:20:58.574 [2024-05-15 00:36:24.581925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:58.574 [... identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs repeat for cid:0 through cid:37, lba:8192 through lba:12928, 00:36:24.581954 - 00:36:24.583079 ...]
00:20:58.575 [2024-05-15 00:36:24.583095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:58.575 [2024-05-15 00:36:24.583717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.575 [2024-05-15 00:36:24.583810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.575 [2024-05-15 00:36:24.583825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.576 [2024-05-15 00:36:24.583843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.576 [2024-05-15 00:36:24.583860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.576 [2024-05-15 00:36:24.583874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.576 [2024-05-15 00:36:24.583889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81650 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.586248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:58.576 [2024-05-15 00:36:24.586281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:58.576 [2024-05-15 00:36:24.586301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:58.576 [2024-05-15 00:36:24.586320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:58.576 [2024-05-15 00:36:24.586443] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:58.576 [2024-05-15 00:36:24.586471] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:58.576 [2024-05-15 00:36:24.586490] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
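For context: the long run of READ / ABORTED - SQ DELETION (00/08) pairs above is the host side draining its I/O queue. Once the target begins shutting down, every READ still outstanding on sqid:1 is completed back with the abort status, one completion per cid, and only then are the controllers disconnected and reset (the cnode2/3/5/6 notices at the end of the block). To tally these entries from a saved copy of this console output, something like the following works (the log filename is a placeholder, not something the test produces):

    grep -c 'ABORTED - SQ DELETION' nvmf-tcp-phy-autotest.log
    grep -o 'READ sqid:1 cid:[0-9]*' nvmf-tcp-phy-autotest.log | sort -u | wc -l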
00:20:58.576 [2024-05-15 00:36:24.586600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:58.576 [2024-05-15 00:36:24.586624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:58.576 task offset: 16512 on job bdev=Nvme10n1 fails
00:20:58.576
00:20:58.576 Latency(us)
00:20:58.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:58.576 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme1n1 ended in about 0.92 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme1n1 : 0.92 139.09 8.69 69.55 0.00 303337.37 13592.65 343311.55
00:20:58.576 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme2n1 ended in about 0.93 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme2n1 : 0.93 137.91 8.62 68.95 0.00 299687.63 52817.16 330883.98
00:20:58.576 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme3n1 ended in about 0.93 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme3n1 : 0.93 74.10 4.63 63.36 0.00 440979.91 48351.00 355739.12
00:20:58.576 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme4n1 ended in about 0.91 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme4n1 : 0.91 140.34 8.77 70.17 0.00 281728.19 10825.58 351078.78
00:20:58.576 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme5n1 ended in about 0.93 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme5n1 : 0.93 68.50 4.28 68.50 0.00 424506.41 82721.00 313796.08
00:20:58.576 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme6n1 ended in about 0.94 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme6n1 : 0.94 68.27 4.27 68.27 0.00 416877.99 39807.05 337097.77
00:20:58.576 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme7n1 ended in about 0.94 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme7n1 : 0.94 68.04 4.25 68.04 0.00 409428.57 20874.43 441178.64
00:20:58.576 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme8n1 ended in about 0.94 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme8n1 : 0.94 67.81 4.24 67.81 0.00 402103.56 26020.22 374380.47
00:20:58.576 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme9n1 ended in about 0.95 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme9n1 : 0.95 67.58 4.22 67.58 0.00 394741.76 23981.32 388361.48
00:20:58.576 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:58.576 Job: Nvme10n1 ended in about 0.91 seconds with error
00:20:58.576 Verification LBA range: start 0x0 length 0x400
00:20:58.576 Nvme10n1 : 0.91 140.62 8.79 70.31 0.00 245056.98 8107.05 333990.87
00:20:58.576 ===================================================================================================================
00:20:58.576 Total : 972.24 60.77 682.52
0.00 348612.79 8107.05 441178.64 00:20:58.576 [2024-05-15 00:36:24.617170] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:58.576 [2024-05-15 00:36:24.617259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:58.576 [2024-05-15 00:36:24.617651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.617844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.617872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3ed10 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.617895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3ed10 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.618088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.618262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.618288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d82ec0 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.618304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d82ec0 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.618587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.618759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.618785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8fc00 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.618800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8fc00 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.618967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.619135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.619160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2e8b0 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.619176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2e8b0 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.621119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:58.576 [2024-05-15 00:36:24.621147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:58.576 [2024-05-15 00:36:24.621393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.621583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.621609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbcdc0 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.621632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbcdc0 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.621798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.621958] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.621983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbd0d0 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.622006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbd0d0 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.622158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.622308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.622333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2da40 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.622348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2da40 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.622376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3ed10 (9): Bad file descriptor 00:20:58.576 [2024-05-15 00:36:24.622401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d82ec0 (9): Bad file descriptor 00:20:58.576 [2024-05-15 00:36:24.622418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8fc00 (9): Bad file descriptor 00:20:58.576 [2024-05-15 00:36:24.622436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2e8b0 (9): Bad file descriptor 00:20:58.576 [2024-05-15 00:36:24.622487] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:58.576 [2024-05-15 00:36:24.622513] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:58.576 [2024-05-15 00:36:24.622533] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:58.576 [2024-05-15 00:36:24.622552] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:58.576 [2024-05-15 00:36:24.622570] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:58.576 [2024-05-15 00:36:24.622654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:58.576 [2024-05-15 00:36:24.622854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.623021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.623048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d747c0 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.623063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d747c0 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.623225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.623383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.576 [2024-05-15 00:36:24.623407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8f6d0 with addr=10.0.0.2, port=4420 00:20:58.576 [2024-05-15 00:36:24.623422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8f6d0 is same with the state(5) to be set 00:20:58.576 [2024-05-15 00:36:24.623440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbcdc0 (9): Bad file descriptor 00:20:58.576 [2024-05-15 00:36:24.623459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbd0d0 (9): Bad file descriptor 00:20:58.576 [2024-05-15 00:36:24.623477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2da40 (9): Bad file descriptor 00:20:58.576 [2024-05-15 00:36:24.623501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:58.576 [2024-05-15 00:36:24.623515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:58.576 [2024-05-15 00:36:24.623533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:58.576 [2024-05-15 00:36:24.623551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:58.576 [2024-05-15 00:36:24.623565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:58.576 [2024-05-15 00:36:24.623577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:58.576 [2024-05-15 00:36:24.623594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:58.576 [2024-05-15 00:36:24.623608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:58.576 [2024-05-15 00:36:24.623620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:58.577 [2024-05-15 00:36:24.623636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:58.577 [2024-05-15 00:36:24.623648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:58.577 [2024-05-15 00:36:24.623662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:20:58.577 [2024-05-15 00:36:24.623768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.623788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.623801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.623813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.623976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.577 [2024-05-15 00:36:24.624181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.577 [2024-05-15 00:36:24.624205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddfa50 with addr=10.0.0.2, port=4420 00:20:58.577 [2024-05-15 00:36:24.624221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfa50 is same with the state(5) to be set 00:20:58.577 [2024-05-15 00:36:24.624239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d747c0 (9): Bad file descriptor 00:20:58.577 [2024-05-15 00:36:24.624258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8f6d0 (9): Bad file descriptor 00:20:58.577 [2024-05-15 00:36:24.624274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:58.577 [2024-05-15 00:36:24.624286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:58.577 [2024-05-15 00:36:24.624299] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:58.577 [2024-05-15 00:36:24.624317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:58.577 [2024-05-15 00:36:24.624330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:58.577 [2024-05-15 00:36:24.624343] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:58.577 [2024-05-15 00:36:24.624358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:58.577 [2024-05-15 00:36:24.624371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:58.577 [2024-05-15 00:36:24.624384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:58.577 [2024-05-15 00:36:24.624426] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.624444] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.624456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.577 [2024-05-15 00:36:24.624472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddfa50 (9): Bad file descriptor 00:20:58.577 [2024-05-15 00:36:24.624488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:58.577 [2024-05-15 00:36:24.624500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:58.577 [2024-05-15 00:36:24.624513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:58.577 [2024-05-15 00:36:24.624529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:58.577 [2024-05-15 00:36:24.624542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:58.577 [2024-05-15 00:36:24.624555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:58.577 [2024-05-15 00:36:24.624594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.624611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.577 [2024-05-15 00:36:24.624624] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:58.577 [2024-05-15 00:36:24.624636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:58.577 [2024-05-15 00:36:24.624648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:58.577 [2024-05-15 00:36:24.624686] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
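For anyone reading the bdevperf summary a little further up: the job used 64 KiB I/Os (IO size: 65536), so the MiB/s column should come out to IOPS/16, and Fail/s times the per-job runtime lands at roughly 64, i.e. one aborted I/O per slot of the depth-64 queue. A quick back-of-the-envelope check of those relationships against the printed numbers (illustrative only, not part of the test):

    # Nvme1n1 row: 139.09 IOPS at 64 KiB per I/O vs. the printed 8.69 MiB/s
    awk 'BEGIN { printf "MiB/s ~= %.2f\n", 139.09 * 65536 / 1048576 }'
    # Fail/s x runtime for Nvme1n1: 69.55 * 0.92 ~= 64 aborted I/Os
    awk 'BEGIN { printf "aborted ~= %.1f\n", 69.55 * 0.92 }'
    # Per-device IOPS sums to the printed 972.24 total (within rounding)
    awk 'BEGIN { print 139.09+137.91+74.10+140.34+68.50+68.27+68.04+67.81+67.58+140.62 }'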
00:20:59.145 00:36:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:59.145 00:36:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:00.087 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 927691 00:21:00.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (927691) - No such process 00:21:00.087 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:00.087 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:00.087 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.088 rmmod nvme_tcp 00:21:00.088 rmmod nvme_fabrics 00:21:00.088 rmmod nvme_keyring 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.088 00:36:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.624 00:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:02.624 00:21:02.624 real 0m7.552s 00:21:02.624 user 0m17.867s 00:21:02.624 sys 0m1.564s 00:21:02.624 
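The teardown traced just above is the generic stoptarget/nvmftestfini path from target/shutdown.sh and nvmf/common.sh. Condensed into plain shell it amounts to roughly the following; this is an illustrative summary of the traced commands, not the helper functions themselves, and the namespace-deletion line is an assumption about what _remove_spdk_ns does:

    # stoptarget: drop the per-run state files bdevperf left behind
    rm -f ./local-job0-0-verify.state
    rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"   # $rootdir = SPDK checkout (assumed)

    # nvmftestfini / nvmfcleanup: unload the kernel initiator stack
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
    modprobe -v -r nvme-fabrics

    # nvmf_tcp_fini: tear down the TCP test topology
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1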
00:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:02.624 00:36:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 ************************************ 00:21:02.624 END TEST nvmf_shutdown_tc3 00:21:02.624 ************************************ 00:21:02.624 00:36:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:02.624 00:21:02.624 real 0m28.010s 00:21:02.624 user 1m15.430s 00:21:02.624 sys 0m6.849s 00:21:02.624 00:36:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:02.624 00:36:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 ************************************ 00:21:02.624 END TEST nvmf_shutdown 00:21:02.624 ************************************ 00:21:02.624 00:36:28 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:21:02.624 00:36:28 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:02.624 00:36:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 00:36:28 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:21:02.624 00:36:28 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:02.624 00:36:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 00:36:28 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:21:02.624 00:36:28 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:02.624 00:36:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:02.624 00:36:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:02.624 00:36:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.624 ************************************ 00:21:02.624 START TEST nvmf_multicontroller 00:21:02.624 ************************************ 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:02.624 * Looking for test storage... 
00:21:02.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.624 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:02.625 00:36:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.625 00:36:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:05.156 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.157 00:36:30 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:05.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:05.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:05.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:05.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.157 00:36:30 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.157 00:36:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:21:05.157 00:21:05.157 --- 10.0.0.2 ping statistics --- 00:21:05.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.157 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:21:05.157 00:21:05.157 --- 10.0.0.1 ping statistics --- 00:21:05.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.157 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=930512 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 930512 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 930512 ']' 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:05.157 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.157 [2024-05-15 00:36:31.074876] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:21:05.157 [2024-05-15 00:36:31.074975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.157 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.157 [2024-05-15 00:36:31.151025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:05.157 [2024-05-15 00:36:31.256404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.157 [2024-05-15 00:36:31.256473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.157 [2024-05-15 00:36:31.256487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.157 [2024-05-15 00:36:31.256498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.158 [2024-05-15 00:36:31.256522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.158 [2024-05-15 00:36:31.256631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.158 [2024-05-15 00:36:31.256690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.158 [2024-05-15 00:36:31.256693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.416 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:05.416 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:21:05.416 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.416 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:05.416 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 [2024-05-15 00:36:31.385697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 Malloc0 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 [2024-05-15 00:36:31.444459] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:05.417 [2024-05-15 00:36:31.444745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 [2024-05-15 00:36:31.452580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 Malloc1 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=930534 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 930534 /var/tmp/bdevperf.sock 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 930534 ']' 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
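The trace below drives bdev_nvme_attach_controller through the bdevperf RPC socket: NVMe0 is attached once, and the test then repeats the attach with a different host NQN, a different subsystem NQN, and with multipath set to disable and to failover, expecting each repeat to be rejected with JSON-RPC error -114. A minimal sketch of that sequence, reusing the rpc_cmd and NOT helpers, socket path and addresses exactly as they appear in this run (they are specific to this job, not general defaults):

    # first attach succeeds and registers bdev NVMe0n1
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000
    # re-attaching under the same controller name with conflicting options
    # (here: -x failover) must fail with error -114; NOT asserts the failure
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000 -x failover

A second controller under a new name (NVMe1, port 4421, same subsystem) is still accepted, which is what host/multicontroller.sh@87 does before bdevperf runs its write workload.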
00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:05.417 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.984 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:05.984 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:21:05.984 00:36:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:05.984 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.984 00:36:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.984 NVMe0n1 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.984 1 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.984 request: 00:21:05.984 { 00:21:05.984 "name": "NVMe0", 00:21:05.984 "trtype": "tcp", 00:21:05.984 "traddr": "10.0.0.2", 00:21:05.984 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:05.984 "hostaddr": "10.0.0.2", 00:21:05.984 "hostsvcid": "60000", 00:21:05.984 "adrfam": "ipv4", 00:21:05.984 "trsvcid": "4420", 00:21:05.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.984 "method": 
"bdev_nvme_attach_controller", 00:21:05.984 "req_id": 1 00:21:05.984 } 00:21:05.984 Got JSON-RPC error response 00:21:05.984 response: 00:21:05.984 { 00:21:05.984 "code": -114, 00:21:05.984 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:05.984 } 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.984 request: 00:21:05.984 { 00:21:05.984 "name": "NVMe0", 00:21:05.984 "trtype": "tcp", 00:21:05.984 "traddr": "10.0.0.2", 00:21:05.984 "hostaddr": "10.0.0.2", 00:21:05.984 "hostsvcid": "60000", 00:21:05.984 "adrfam": "ipv4", 00:21:05.984 "trsvcid": "4420", 00:21:05.984 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.984 "method": "bdev_nvme_attach_controller", 00:21:05.984 "req_id": 1 00:21:05.984 } 00:21:05.984 Got JSON-RPC error response 00:21:05.984 response: 00:21:05.984 { 00:21:05.984 "code": -114, 00:21:05.984 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:05.984 } 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.984 request: 00:21:05.984 { 00:21:05.984 "name": "NVMe0", 00:21:05.984 "trtype": "tcp", 00:21:05.984 "traddr": "10.0.0.2", 00:21:05.984 "hostaddr": "10.0.0.2", 00:21:05.984 "hostsvcid": "60000", 00:21:05.984 "adrfam": "ipv4", 00:21:05.984 "trsvcid": "4420", 00:21:05.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.984 "multipath": "disable", 00:21:05.984 "method": "bdev_nvme_attach_controller", 00:21:05.984 "req_id": 1 00:21:05.984 } 00:21:05.984 Got JSON-RPC error response 00:21:05.984 response: 00:21:05.984 { 00:21:05.984 "code": -114, 00:21:05.984 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:05.984 } 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.984 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.985 request: 00:21:05.985 { 00:21:05.985 "name": "NVMe0", 00:21:05.985 "trtype": "tcp", 00:21:05.985 "traddr": "10.0.0.2", 00:21:05.985 "hostaddr": "10.0.0.2", 00:21:05.985 "hostsvcid": "60000", 00:21:05.985 "adrfam": "ipv4", 00:21:05.985 "trsvcid": "4420", 00:21:05.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.985 "multipath": "failover", 00:21:05.985 "method": "bdev_nvme_attach_controller", 00:21:05.985 "req_id": 1 00:21:05.985 } 00:21:05.985 Got JSON-RPC error response 00:21:05.985 response: 00:21:05.985 { 00:21:05.985 "code": -114, 00:21:05.985 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:05.985 } 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.985 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:06.242 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:06.242 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:06.242 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.243 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:06.243 00:36:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.243 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:06.243 00:36:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:07.615 0 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 930534 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 930534 ']' 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 930534 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 930534 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 930534' 00:21:07.616 killing process with pid 930534 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 930534 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 930534 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:07.616 00:36:33 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:21:07.616 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:07.616 [2024-05-15 00:36:31.555993] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:21:07.616 [2024-05-15 00:36:31.556079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930534 ] 00:21:07.616 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.616 [2024-05-15 00:36:31.626167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.616 [2024-05-15 00:36:31.738082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.616 [2024-05-15 00:36:32.263339] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name d3df87a6-959e-4841-8613-437de7767e01 already exists 00:21:07.616 [2024-05-15 00:36:32.263381] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:d3df87a6-959e-4841-8613-437de7767e01 alias for bdev NVMe1n1 00:21:07.616 [2024-05-15 00:36:32.263416] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:07.616 Running I/O for 1 seconds... 
00:21:07.616 00:21:07.616 Latency(us) 00:21:07.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.616 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:07.616 NVMe0n1 : 1.01 18768.28 73.31 0.00 0.00 6801.48 3422.44 10922.67 00:21:07.616 =================================================================================================================== 00:21:07.616 Total : 18768.28 73.31 0.00 0.00 6801.48 3422.44 10922.67 00:21:07.616 Received shutdown signal, test time was about 1.000000 seconds 00:21:07.616 00:21:07.616 Latency(us) 00:21:07.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.616 =================================================================================================================== 00:21:07.616 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.616 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.616 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.873 rmmod nvme_tcp 00:21:07.873 rmmod nvme_fabrics 00:21:07.873 rmmod nvme_keyring 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 930512 ']' 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 930512 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 930512 ']' 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 930512 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 930512 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:07.873 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:07.874 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 930512' 00:21:07.874 killing process with pid 930512 00:21:07.874 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 930512 00:21:07.874 [2024-05-15 00:36:33.848371] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:07.874 00:36:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 930512 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.132 00:36:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.665 00:36:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.665 00:21:10.665 real 0m7.869s 00:21:10.665 user 0m11.705s 00:21:10.665 sys 0m2.591s 00:21:10.665 00:36:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:10.665 00:36:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:10.665 ************************************ 00:21:10.665 END TEST nvmf_multicontroller 00:21:10.665 ************************************ 00:21:10.665 00:36:36 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:10.665 00:36:36 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:10.666 00:36:36 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:10.666 00:36:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.666 ************************************ 00:21:10.666 START TEST nvmf_aer 00:21:10.666 ************************************ 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:10.666 * Looking for test storage... 
00:21:10.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.666 00:36:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:13.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:21:13.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:13.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:13.199 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:13.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.200 
00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.200 00:36:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:13.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:21:13.200 00:21:13.200 --- 10.0.0.2 ping statistics --- 00:21:13.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.200 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:21:13.200 00:21:13.200 --- 10.0.0.1 ping statistics --- 00:21:13.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.200 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=933161 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 933161 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 933161 ']' 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:13.200 00:36:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.200 [2024-05-15 00:36:39.088056] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:21:13.200 [2024-05-15 00:36:39.088130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.200 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.200 [2024-05-15 00:36:39.168634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.200 [2024-05-15 00:36:39.286244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.200 [2024-05-15 00:36:39.286312] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:13.200 [2024-05-15 00:36:39.286329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.200 [2024-05-15 00:36:39.286343] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.200 [2024-05-15 00:36:39.286354] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.200 [2024-05-15 00:36:39.286439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.200 [2024-05-15 00:36:39.286511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.200 [2024-05-15 00:36:39.286602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.200 [2024-05-15 00:36:39.286604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.135 [2024-05-15 00:36:40.054864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.135 Malloc0 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.135 [2024-05-15 00:36:40.106339] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:14.135 [2024-05-15 00:36:40.106659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.135 [ 00:21:14.135 { 00:21:14.135 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:14.135 "subtype": "Discovery", 00:21:14.135 "listen_addresses": [], 00:21:14.135 "allow_any_host": true, 00:21:14.135 "hosts": [] 00:21:14.135 }, 00:21:14.135 { 00:21:14.135 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.135 "subtype": "NVMe", 00:21:14.135 "listen_addresses": [ 00:21:14.135 { 00:21:14.135 "trtype": "TCP", 00:21:14.135 "adrfam": "IPv4", 00:21:14.135 "traddr": "10.0.0.2", 00:21:14.135 "trsvcid": "4420" 00:21:14.135 } 00:21:14.135 ], 00:21:14.135 "allow_any_host": true, 00:21:14.135 "hosts": [], 00:21:14.135 "serial_number": "SPDK00000000000001", 00:21:14.135 "model_number": "SPDK bdev Controller", 00:21:14.135 "max_namespaces": 2, 00:21:14.135 "min_cntlid": 1, 00:21:14.135 "max_cntlid": 65519, 00:21:14.135 "namespaces": [ 00:21:14.135 { 00:21:14.135 "nsid": 1, 00:21:14.135 "bdev_name": "Malloc0", 00:21:14.135 "name": "Malloc0", 00:21:14.135 "nguid": "9F75204BF2254BC78C8C15AB60CE60B9", 00:21:14.135 "uuid": "9f75204b-f225-4bc7-8c8c-15ab60ce60b9" 00:21:14.135 } 00:21:14.135 ] 00:21:14.135 } 00:21:14.135 ] 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=933317 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:21:14.135 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:21:14.135 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 Malloc1 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 Asynchronous Event Request test 00:21:14.394 Attaching to 10.0.0.2 00:21:14.394 Attached to 10.0.0.2 00:21:14.394 Registering asynchronous event callbacks... 00:21:14.394 Starting namespace attribute notice tests for all controllers... 00:21:14.394 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:14.394 aer_cb - Changed Namespace 00:21:14.394 Cleaning up... 00:21:14.394 [ 00:21:14.394 { 00:21:14.394 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:14.394 "subtype": "Discovery", 00:21:14.394 "listen_addresses": [], 00:21:14.394 "allow_any_host": true, 00:21:14.394 "hosts": [] 00:21:14.394 }, 00:21:14.394 { 00:21:14.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.394 "subtype": "NVMe", 00:21:14.394 "listen_addresses": [ 00:21:14.394 { 00:21:14.394 "trtype": "TCP", 00:21:14.394 "adrfam": "IPv4", 00:21:14.394 "traddr": "10.0.0.2", 00:21:14.394 "trsvcid": "4420" 00:21:14.394 } 00:21:14.394 ], 00:21:14.394 "allow_any_host": true, 00:21:14.394 "hosts": [], 00:21:14.394 "serial_number": "SPDK00000000000001", 00:21:14.394 "model_number": "SPDK bdev Controller", 00:21:14.394 "max_namespaces": 2, 00:21:14.394 "min_cntlid": 1, 00:21:14.394 "max_cntlid": 65519, 00:21:14.394 "namespaces": [ 00:21:14.394 { 00:21:14.394 "nsid": 1, 00:21:14.394 "bdev_name": "Malloc0", 00:21:14.394 "name": "Malloc0", 00:21:14.394 "nguid": "9F75204BF2254BC78C8C15AB60CE60B9", 00:21:14.394 "uuid": "9f75204b-f225-4bc7-8c8c-15ab60ce60b9" 00:21:14.394 }, 00:21:14.394 { 00:21:14.394 "nsid": 2, 00:21:14.394 "bdev_name": "Malloc1", 00:21:14.394 "name": "Malloc1", 00:21:14.394 "nguid": "82872E913B8A4FF6AC91E4B2DFF861CD", 00:21:14.394 "uuid": "82872e91-3b8a-4ff6-ac91-e4b2dff861cd" 00:21:14.394 } 00:21:14.394 ] 00:21:14.394 } 00:21:14.394 ] 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 933317 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 00:36:40 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.394 rmmod nvme_tcp 00:21:14.394 rmmod nvme_fabrics 00:21:14.394 rmmod nvme_keyring 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 933161 ']' 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 933161 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 933161 ']' 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 933161 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:14.394 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 933161 00:21:14.652 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:14.652 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:14.652 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 933161' 00:21:14.652 killing process with pid 933161 00:21:14.652 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 933161 00:21:14.652 [2024-05-15 00:36:40.566208] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:14.652 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 933161 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.911 00:36:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.814 00:36:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.814 00:21:16.814 real 0m6.622s 00:21:16.814 user 0m7.110s 00:21:16.814 sys 0m2.343s 00:21:16.814 00:36:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:16.814 00:36:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:16.814 ************************************ 00:21:16.814 END TEST nvmf_aer 00:21:16.814 ************************************ 00:21:16.814 00:36:42 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:16.814 00:36:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:16.814 00:36:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:16.814 00:36:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.814 ************************************ 00:21:16.814 START TEST nvmf_async_init 00:21:16.814 ************************************ 00:21:16.814 00:36:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:17.072 * Looking for test storage... 00:21:17.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.072 
00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6c971b6a09a14f76999a2a4baa7f02b3 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.072 00:36:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:19.601 00:36:45 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:19.601 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:19.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:19.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.602 00:36:45 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:19.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:19.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:19.602 00:36:45 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:19.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:21:19.602 00:21:19.602 --- 10.0.0.2 ping statistics --- 00:21:19.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.602 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:21:19.602 00:21:19.602 --- 10.0.0.1 ping statistics --- 00:21:19.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.602 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=935659 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 935659 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 935659 ']' 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:19.602 00:36:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:19.602 [2024-05-15 00:36:45.723847] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:21:19.602 [2024-05-15 00:36:45.723947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.602 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.860 [2024-05-15 00:36:45.804868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.860 [2024-05-15 00:36:45.922254] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.860 [2024-05-15 00:36:45.922317] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:19.860 [2024-05-15 00:36:45.922334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.860 [2024-05-15 00:36:45.922347] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.860 [2024-05-15 00:36:45.922359] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.860 [2024-05-15 00:36:45.922395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.792 [2024-05-15 00:36:46.689841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.792 null0 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6c971b6a09a14f76999a2a4baa7f02b3 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:20.792 
00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.792 [2024-05-15 00:36:46.729871] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:20.792 [2024-05-15 00:36:46.730147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.792 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 nvme0n1 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 [ 00:21:21.049 { 00:21:21.049 "name": "nvme0n1", 00:21:21.049 "aliases": [ 00:21:21.049 "6c971b6a-09a1-4f76-999a-2a4baa7f02b3" 00:21:21.049 ], 00:21:21.049 "product_name": "NVMe disk", 00:21:21.049 "block_size": 512, 00:21:21.049 "num_blocks": 2097152, 00:21:21.049 "uuid": "6c971b6a-09a1-4f76-999a-2a4baa7f02b3", 00:21:21.049 "assigned_rate_limits": { 00:21:21.049 "rw_ios_per_sec": 0, 00:21:21.049 "rw_mbytes_per_sec": 0, 00:21:21.049 "r_mbytes_per_sec": 0, 00:21:21.049 "w_mbytes_per_sec": 0 00:21:21.049 }, 00:21:21.049 "claimed": false, 00:21:21.049 "zoned": false, 00:21:21.049 "supported_io_types": { 00:21:21.049 "read": true, 00:21:21.049 "write": true, 00:21:21.049 "unmap": false, 00:21:21.049 "write_zeroes": true, 00:21:21.049 "flush": true, 00:21:21.049 "reset": true, 00:21:21.049 "compare": true, 00:21:21.049 "compare_and_write": true, 00:21:21.049 "abort": true, 00:21:21.049 "nvme_admin": true, 00:21:21.049 "nvme_io": true 00:21:21.049 }, 00:21:21.049 "memory_domains": [ 00:21:21.049 { 00:21:21.049 "dma_device_id": "system", 00:21:21.049 "dma_device_type": 1 00:21:21.049 } 00:21:21.049 ], 00:21:21.049 "driver_specific": { 00:21:21.049 "nvme": [ 00:21:21.049 { 00:21:21.049 "trid": { 00:21:21.049 "trtype": "TCP", 00:21:21.049 "adrfam": "IPv4", 00:21:21.049 "traddr": "10.0.0.2", 00:21:21.049 "trsvcid": "4420", 00:21:21.049 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:21.049 }, 00:21:21.049 "ctrlr_data": { 00:21:21.049 "cntlid": 1, 00:21:21.049 "vendor_id": "0x8086", 00:21:21.049 "model_number": "SPDK bdev Controller", 00:21:21.049 "serial_number": "00000000000000000000", 00:21:21.049 "firmware_revision": "24.05", 00:21:21.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:21.049 "oacs": { 00:21:21.049 "security": 0, 00:21:21.049 "format": 0, 00:21:21.049 "firmware": 0, 00:21:21.049 "ns_manage": 0 00:21:21.049 }, 00:21:21.049 "multi_ctrlr": true, 00:21:21.049 "ana_reporting": false 00:21:21.049 }, 00:21:21.049 "vs": { 00:21:21.049 "nvme_version": "1.3" 00:21:21.049 }, 00:21:21.049 "ns_data": { 00:21:21.049 "id": 1, 00:21:21.049 "can_share": true 00:21:21.049 } 
00:21:21.049 } 00:21:21.049 ], 00:21:21.049 "mp_policy": "active_passive" 00:21:21.049 } 00:21:21.049 } 00:21:21.049 ] 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 [2024-05-15 00:36:46.978610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:21.049 [2024-05-15 00:36:46.978689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77cbe0 (9): Bad file descriptor 00:21:21.049 [2024-05-15 00:36:47.111095] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 [ 00:21:21.049 { 00:21:21.049 "name": "nvme0n1", 00:21:21.049 "aliases": [ 00:21:21.049 "6c971b6a-09a1-4f76-999a-2a4baa7f02b3" 00:21:21.049 ], 00:21:21.049 "product_name": "NVMe disk", 00:21:21.049 "block_size": 512, 00:21:21.049 "num_blocks": 2097152, 00:21:21.049 "uuid": "6c971b6a-09a1-4f76-999a-2a4baa7f02b3", 00:21:21.049 "assigned_rate_limits": { 00:21:21.049 "rw_ios_per_sec": 0, 00:21:21.049 "rw_mbytes_per_sec": 0, 00:21:21.049 "r_mbytes_per_sec": 0, 00:21:21.049 "w_mbytes_per_sec": 0 00:21:21.049 }, 00:21:21.049 "claimed": false, 00:21:21.049 "zoned": false, 00:21:21.049 "supported_io_types": { 00:21:21.049 "read": true, 00:21:21.049 "write": true, 00:21:21.049 "unmap": false, 00:21:21.049 "write_zeroes": true, 00:21:21.049 "flush": true, 00:21:21.049 "reset": true, 00:21:21.049 "compare": true, 00:21:21.049 "compare_and_write": true, 00:21:21.049 "abort": true, 00:21:21.049 "nvme_admin": true, 00:21:21.049 "nvme_io": true 00:21:21.049 }, 00:21:21.049 "memory_domains": [ 00:21:21.049 { 00:21:21.049 "dma_device_id": "system", 00:21:21.049 "dma_device_type": 1 00:21:21.049 } 00:21:21.049 ], 00:21:21.049 "driver_specific": { 00:21:21.049 "nvme": [ 00:21:21.049 { 00:21:21.049 "trid": { 00:21:21.049 "trtype": "TCP", 00:21:21.049 "adrfam": "IPv4", 00:21:21.049 "traddr": "10.0.0.2", 00:21:21.049 "trsvcid": "4420", 00:21:21.049 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:21.049 }, 00:21:21.049 "ctrlr_data": { 00:21:21.049 "cntlid": 2, 00:21:21.049 "vendor_id": "0x8086", 00:21:21.049 "model_number": "SPDK bdev Controller", 00:21:21.049 "serial_number": "00000000000000000000", 00:21:21.049 "firmware_revision": "24.05", 00:21:21.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:21.049 "oacs": { 00:21:21.049 "security": 0, 00:21:21.049 "format": 0, 00:21:21.049 "firmware": 0, 00:21:21.049 "ns_manage": 0 00:21:21.049 }, 00:21:21.049 "multi_ctrlr": true, 00:21:21.049 "ana_reporting": false 00:21:21.049 }, 00:21:21.049 "vs": { 00:21:21.049 "nvme_version": "1.3" 00:21:21.049 }, 00:21:21.049 "ns_data": { 00:21:21.049 "id": 1, 00:21:21.049 "can_share": true 00:21:21.049 } 00:21:21.049 } 00:21:21.049 ], 00:21:21.049 "mp_policy": "active_passive" 
00:21:21.049 } 00:21:21.049 } 00:21:21.049 ] 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DaHdqNHgp3 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DaHdqNHgp3 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 [2024-05-15 00:36:47.163261] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.049 [2024-05-15 00:36:47.163439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DaHdqNHgp3 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 [2024-05-15 00:36:47.171287] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DaHdqNHgp3 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.049 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.049 [2024-05-15 00:36:47.179294] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.049 [2024-05-15 00:36:47.179353] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:21:21.306 nvme0n1 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.306 [ 00:21:21.306 { 00:21:21.306 "name": "nvme0n1", 00:21:21.306 "aliases": [ 00:21:21.306 "6c971b6a-09a1-4f76-999a-2a4baa7f02b3" 00:21:21.306 ], 00:21:21.306 "product_name": "NVMe disk", 00:21:21.306 "block_size": 512, 00:21:21.306 "num_blocks": 2097152, 00:21:21.306 "uuid": "6c971b6a-09a1-4f76-999a-2a4baa7f02b3", 00:21:21.306 "assigned_rate_limits": { 00:21:21.306 "rw_ios_per_sec": 0, 00:21:21.306 "rw_mbytes_per_sec": 0, 00:21:21.306 "r_mbytes_per_sec": 0, 00:21:21.306 "w_mbytes_per_sec": 0 00:21:21.306 }, 00:21:21.306 "claimed": false, 00:21:21.306 "zoned": false, 00:21:21.306 "supported_io_types": { 00:21:21.306 "read": true, 00:21:21.306 "write": true, 00:21:21.306 "unmap": false, 00:21:21.306 "write_zeroes": true, 00:21:21.306 "flush": true, 00:21:21.306 "reset": true, 00:21:21.306 "compare": true, 00:21:21.306 "compare_and_write": true, 00:21:21.306 "abort": true, 00:21:21.306 "nvme_admin": true, 00:21:21.306 "nvme_io": true 00:21:21.306 }, 00:21:21.306 "memory_domains": [ 00:21:21.306 { 00:21:21.306 "dma_device_id": "system", 00:21:21.306 "dma_device_type": 1 00:21:21.306 } 00:21:21.306 ], 00:21:21.306 "driver_specific": { 00:21:21.306 "nvme": [ 00:21:21.306 { 00:21:21.306 "trid": { 00:21:21.306 "trtype": "TCP", 00:21:21.306 "adrfam": "IPv4", 00:21:21.306 "traddr": "10.0.0.2", 00:21:21.306 "trsvcid": "4421", 00:21:21.306 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:21.306 }, 00:21:21.306 "ctrlr_data": { 00:21:21.306 "cntlid": 3, 00:21:21.306 "vendor_id": "0x8086", 00:21:21.306 "model_number": "SPDK bdev Controller", 00:21:21.306 "serial_number": "00000000000000000000", 00:21:21.306 "firmware_revision": "24.05", 00:21:21.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:21.306 "oacs": { 00:21:21.306 "security": 0, 00:21:21.306 "format": 0, 00:21:21.306 "firmware": 0, 00:21:21.306 "ns_manage": 0 00:21:21.306 }, 00:21:21.306 "multi_ctrlr": true, 00:21:21.306 "ana_reporting": false 00:21:21.306 }, 00:21:21.306 "vs": { 00:21:21.306 "nvme_version": "1.3" 00:21:21.306 }, 00:21:21.306 "ns_data": { 00:21:21.306 "id": 1, 00:21:21.306 "can_share": true 00:21:21.306 } 00:21:21.306 } 00:21:21.306 ], 00:21:21.306 "mp_policy": "active_passive" 00:21:21.306 } 00:21:21.306 } 00:21:21.306 ] 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.DaHdqNHgp3 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:21.306 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.307 rmmod nvme_tcp 00:21:21.307 rmmod nvme_fabrics 00:21:21.307 rmmod nvme_keyring 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 935659 ']' 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 935659 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 935659 ']' 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 935659 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 935659 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 935659' 00:21:21.307 killing process with pid 935659 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 935659 00:21:21.307 [2024-05-15 00:36:47.362053] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:21.307 [2024-05-15 00:36:47.362087] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:21.307 [2024-05-15 00:36:47.362102] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.307 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 935659 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.565 00:36:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.098 00:36:49 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:24.098 00:21:24.098 real 0m6.706s 00:21:24.098 user 0m3.096s 00:21:24.098 sys 0m2.229s 00:21:24.098 00:36:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:24.098 00:36:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:24.098 ************************************ 00:21:24.098 END TEST nvmf_async_init 00:21:24.098 ************************************ 00:21:24.098 00:36:49 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:24.098 00:36:49 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:24.098 00:36:49 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:24.098 00:36:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.098 ************************************ 00:21:24.098 START TEST dma 00:21:24.098 ************************************ 00:21:24.098 00:36:49 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:24.098 * Looking for test storage... 00:21:24.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:24.098 00:36:49 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.098 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.098 00:36:49 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.098 00:36:49 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.098 00:36:49 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.099 00:36:49 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:24.099 00:36:49 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.099 00:36:49 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.099 00:36:49 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:24.099 00:36:49 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:24.099 00:21:24.099 real 0m0.067s 00:21:24.099 user 0m0.030s 00:21:24.099 sys 0m0.043s 00:21:24.099 00:36:49 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:24.099 00:36:49 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:24.099 ************************************ 
00:21:24.099 END TEST dma 00:21:24.099 ************************************ 00:21:24.099 00:36:49 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:24.099 00:36:49 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:24.099 00:36:49 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:24.099 00:36:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.099 ************************************ 00:21:24.099 START TEST nvmf_identify 00:21:24.099 ************************************ 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:24.099 * Looking for test storage... 00:21:24.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.099 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:24.100 00:36:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:26.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:26.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:26.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:26.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:26.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:26.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:21:26.630 00:21:26.630 --- 10.0.0.2 ping statistics --- 00:21:26.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.630 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:21:26.630 00:21:26.630 --- 10.0.0.1 ping statistics --- 00:21:26.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.630 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=938203 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 938203 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 938203 ']' 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.630 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:26.631 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.631 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:26.631 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.631 [2024-05-15 00:36:52.498579] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
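The trace above brings up the loopback NVMe/TCP test topology before the target's own startup logs begin: the two E810 port netdevs (cvl_0_0, cvl_0_1) are discovered, the target-side interface is moved into a dedicated network namespace, both sides get addresses on 10.0.0.0/24, connectivity is verified with ping in both directions, the kernel nvme-tcp module is loaded, and nvmf_tgt is launched inside the namespace. A minimal sketch of that bring-up, using the same interface names, addresses, and port as in the trace (the helper functions in nvmf/common.sh wrap these steps and are not reproduced here; the relative nvmf_tgt path and backgrounding are assumptions for illustration):

    # Target NIC goes into its own namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

With this in place, the identify test only has to create the TCP transport and a subsystem over RPC and point spdk_nvme_identify at 10.0.0.2:4420, which is exactly what the following log lines show.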
00:21:26.631 [2024-05-15 00:36:52.498653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.631 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.631 [2024-05-15 00:36:52.581831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.631 [2024-05-15 00:36:52.701295] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.631 [2024-05-15 00:36:52.701353] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.631 [2024-05-15 00:36:52.701380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.631 [2024-05-15 00:36:52.701394] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.631 [2024-05-15 00:36:52.701405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.631 [2024-05-15 00:36:52.704954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.631 [2024-05-15 00:36:52.705003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.631 [2024-05-15 00:36:52.705086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.631 [2024-05-15 00:36:52.705083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.890 [2024-05-15 00:36:52.831439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.890 Malloc0 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.890 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.891 [2024-05-15 00:36:52.902145] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:26.891 [2024-05-15 00:36:52.902450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:26.891 [ 00:21:26.891 { 00:21:26.891 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:26.891 "subtype": "Discovery", 00:21:26.891 "listen_addresses": [ 00:21:26.891 { 00:21:26.891 "trtype": "TCP", 00:21:26.891 "adrfam": "IPv4", 00:21:26.891 "traddr": "10.0.0.2", 00:21:26.891 "trsvcid": "4420" 00:21:26.891 } 00:21:26.891 ], 00:21:26.891 "allow_any_host": true, 00:21:26.891 "hosts": [] 00:21:26.891 }, 00:21:26.891 { 00:21:26.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.891 "subtype": "NVMe", 00:21:26.891 "listen_addresses": [ 00:21:26.891 { 00:21:26.891 "trtype": "TCP", 00:21:26.891 "adrfam": "IPv4", 00:21:26.891 "traddr": "10.0.0.2", 00:21:26.891 "trsvcid": "4420" 00:21:26.891 } 00:21:26.891 ], 00:21:26.891 "allow_any_host": true, 00:21:26.891 "hosts": [], 00:21:26.891 "serial_number": "SPDK00000000000001", 00:21:26.891 "model_number": "SPDK bdev Controller", 00:21:26.891 "max_namespaces": 32, 00:21:26.891 "min_cntlid": 1, 00:21:26.891 "max_cntlid": 65519, 00:21:26.891 "namespaces": [ 00:21:26.891 { 00:21:26.891 "nsid": 1, 00:21:26.891 "bdev_name": "Malloc0", 00:21:26.891 "name": "Malloc0", 00:21:26.891 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:26.891 "eui64": "ABCDEF0123456789", 00:21:26.891 "uuid": "d859a8b4-a07c-418b-9509-dab0c51310d0" 00:21:26.891 } 00:21:26.891 ] 00:21:26.891 } 00:21:26.891 ] 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.891 00:36:52 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:26.891 [2024-05-15 
00:36:52.943743] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:21:26.891 [2024-05-15 00:36:52.943783] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938232 ] 00:21:26.891 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.891 [2024-05-15 00:36:52.979511] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:26.891 [2024-05-15 00:36:52.979567] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:26.891 [2024-05-15 00:36:52.979578] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:26.891 [2024-05-15 00:36:52.979593] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:26.891 [2024-05-15 00:36:52.979606] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:26.891 [2024-05-15 00:36:52.979912] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:26.891 [2024-05-15 00:36:52.979971] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a3ac80 0 00:21:26.891 [2024-05-15 00:36:52.985946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:26.891 [2024-05-15 00:36:52.985976] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:26.891 [2024-05-15 00:36:52.985991] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:26.891 [2024-05-15 00:36:52.985998] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:26.891 [2024-05-15 00:36:52.986059] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.986074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.986082] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.891 [2024-05-15 00:36:52.986101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:26.891 [2024-05-15 00:36:52.986128] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.891 [2024-05-15 00:36:52.993956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.891 [2024-05-15 00:36:52.993976] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.891 [2024-05-15 00:36:52.993983] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.993990] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.891 [2024-05-15 00:36:52.994014] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:26.891 [2024-05-15 00:36:52.994026] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:26.891 [2024-05-15 00:36:52.994036] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:26.891 [2024-05-15 00:36:52.994056] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:26.891 [2024-05-15 00:36:52.994065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.994071] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.891 [2024-05-15 00:36:52.994082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.891 [2024-05-15 00:36:52.994106] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.891 [2024-05-15 00:36:52.994317] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.891 [2024-05-15 00:36:52.994330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.891 [2024-05-15 00:36:52.994337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.994344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.891 [2024-05-15 00:36:52.994355] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:26.891 [2024-05-15 00:36:52.994368] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:26.891 [2024-05-15 00:36:52.994380] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.994388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.994394] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.891 [2024-05-15 00:36:52.994405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.891 [2024-05-15 00:36:52.994440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.891 [2024-05-15 00:36:52.994684] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.891 [2024-05-15 00:36:52.994700] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.891 [2024-05-15 00:36:52.994707] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.994714] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.891 [2024-05-15 00:36:52.994725] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:26.891 [2024-05-15 00:36:52.994744] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:26.891 [2024-05-15 00:36:52.994757] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.994764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.994770] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.891 [2024-05-15 00:36:52.994781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.891 [2024-05-15 00:36:52.994817] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.891 [2024-05-15 00:36:52.995057] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.891 [2024-05-15 00:36:52.995073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.891 [2024-05-15 00:36:52.995080] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.995087] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.891 [2024-05-15 00:36:52.995098] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:26.891 [2024-05-15 00:36:52.995115] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.995124] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.995131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.891 [2024-05-15 00:36:52.995142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.891 [2024-05-15 00:36:52.995163] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.891 [2024-05-15 00:36:52.995354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.891 [2024-05-15 00:36:52.995367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.891 [2024-05-15 00:36:52.995374] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.891 [2024-05-15 00:36:52.995380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.891 [2024-05-15 00:36:52.995391] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:26.891 [2024-05-15 00:36:52.995399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:26.891 [2024-05-15 00:36:52.995412] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:26.891 [2024-05-15 00:36:52.995522] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:26.891 [2024-05-15 00:36:52.995531] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:26.892 [2024-05-15 00:36:52.995547] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.995555] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.995576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:52.995586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.892 [2024-05-15 00:36:52.995606] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.892 [2024-05-15 00:36:52.995821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.892 [2024-05-15 00:36:52.995837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:26.892 [2024-05-15 00:36:52.995848] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.995856] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.892 [2024-05-15 00:36:52.995866] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:26.892 [2024-05-15 00:36:52.995883] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.995892] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.995898] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:52.995909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.892 [2024-05-15 00:36:52.995936] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.892 [2024-05-15 00:36:52.996111] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.892 [2024-05-15 00:36:52.996123] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.892 [2024-05-15 00:36:52.996130] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.996137] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.892 [2024-05-15 00:36:52.996146] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:26.892 [2024-05-15 00:36:52.996155] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:26.892 [2024-05-15 00:36:52.996168] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:26.892 [2024-05-15 00:36:52.996183] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:26.892 [2024-05-15 00:36:52.996199] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.996207] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:52.996219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.892 [2024-05-15 00:36:52.996255] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.892 [2024-05-15 00:36:52.996551] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:26.892 [2024-05-15 00:36:52.996563] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:26.892 [2024-05-15 00:36:52.996571] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.996578] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a3ac80): datao=0, datal=4096, cccid=0 00:21:26.892 [2024-05-15 00:36:52.996586] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a99e40) on tqpair(0x1a3ac80): expected_datao=0, 
payload_size=4096 00:21:26.892 [2024-05-15 00:36:52.996594] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.996634] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:52.996645] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.039941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.892 [2024-05-15 00:36:53.039961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.892 [2024-05-15 00:36:53.039968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.039975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.892 [2024-05-15 00:36:53.039990] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:26.892 [2024-05-15 00:36:53.040000] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:26.892 [2024-05-15 00:36:53.040013] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:26.892 [2024-05-15 00:36:53.040022] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:26.892 [2024-05-15 00:36:53.040032] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:26.892 [2024-05-15 00:36:53.040040] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:26.892 [2024-05-15 00:36:53.040061] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:26.892 [2024-05-15 00:36:53.040078] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040086] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:53.040104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:26.892 [2024-05-15 00:36:53.040128] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.892 [2024-05-15 00:36:53.040318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.892 [2024-05-15 00:36:53.040330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.892 [2024-05-15 00:36:53.040337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a99e40) on tqpair=0x1a3ac80 00:21:26.892 [2024-05-15 00:36:53.040363] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040372] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040379] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:53.040389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.892 [2024-05-15 00:36:53.040400] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:53.040422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.892 [2024-05-15 00:36:53.040432] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040439] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040445] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:53.040454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.892 [2024-05-15 00:36:53.040464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:53.040486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.892 [2024-05-15 00:36:53.040495] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:26.892 [2024-05-15 00:36:53.040510] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:26.892 [2024-05-15 00:36:53.040541] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040549] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:53.040560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.892 [2024-05-15 00:36:53.040582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99e40, cid 0, qid 0 00:21:26.892 [2024-05-15 00:36:53.040609] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a99fa0, cid 1, qid 0 00:21:26.892 [2024-05-15 00:36:53.040617] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a100, cid 2, qid 0 00:21:26.892 [2024-05-15 00:36:53.040625] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:26.892 [2024-05-15 00:36:53.040632] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a3c0, cid 4, qid 0 00:21:26.892 [2024-05-15 00:36:53.040839] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.892 [2024-05-15 00:36:53.040851] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.892 [2024-05-15 00:36:53.040858] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040865] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1a9a3c0) on tqpair=0x1a3ac80 00:21:26.892 [2024-05-15 00:36:53.040882] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:26.892 [2024-05-15 00:36:53.040893] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:26.892 [2024-05-15 00:36:53.040911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.892 [2024-05-15 00:36:53.040921] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a3ac80) 00:21:26.892 [2024-05-15 00:36:53.040942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.893 [2024-05-15 00:36:53.040974] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a3c0, cid 4, qid 0 00:21:26.893 [2024-05-15 00:36:53.041173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:26.893 [2024-05-15 00:36:53.041189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:26.893 [2024-05-15 00:36:53.041197] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041203] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a3ac80): datao=0, datal=4096, cccid=4 00:21:26.893 [2024-05-15 00:36:53.041211] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9a3c0) on tqpair(0x1a3ac80): expected_datao=0, payload_size=4096 00:21:26.893 [2024-05-15 00:36:53.041219] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041229] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041237] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041289] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.893 [2024-05-15 00:36:53.041300] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.893 [2024-05-15 00:36:53.041307] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041313] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a3c0) on tqpair=0x1a3ac80 00:21:26.893 [2024-05-15 00:36:53.041336] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:26.893 [2024-05-15 00:36:53.041380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a3ac80) 00:21:26.893 [2024-05-15 00:36:53.041402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.893 [2024-05-15 00:36:53.041418] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041433] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a3ac80) 00:21:26.893 [2024-05-15 00:36:53.041442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:26.893 [2024-05-15 00:36:53.041486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a3c0, cid 4, qid 0 00:21:26.893 [2024-05-15 00:36:53.041498] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a520, cid 5, qid 0 00:21:26.893 [2024-05-15 00:36:53.041772] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:26.893 [2024-05-15 00:36:53.041785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:26.893 [2024-05-15 00:36:53.041792] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041799] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a3ac80): datao=0, datal=1024, cccid=4 00:21:26.893 [2024-05-15 00:36:53.041806] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9a3c0) on tqpair(0x1a3ac80): expected_datao=0, payload_size=1024 00:21:26.893 [2024-05-15 00:36:53.041814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041824] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041831] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041854] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:26.893 [2024-05-15 00:36:53.041864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:26.893 [2024-05-15 00:36:53.041870] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:26.893 [2024-05-15 00:36:53.041877] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a520) on tqpair=0x1a3ac80 00:21:27.154 [2024-05-15 00:36:53.083138] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.154 [2024-05-15 00:36:53.083158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.154 [2024-05-15 00:36:53.083167] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083174] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a3c0) on tqpair=0x1a3ac80 00:21:27.154 [2024-05-15 00:36:53.083195] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a3ac80) 00:21:27.154 [2024-05-15 00:36:53.083217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.154 [2024-05-15 00:36:53.083248] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a3c0, cid 4, qid 0 00:21:27.154 [2024-05-15 00:36:53.083433] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.154 [2024-05-15 00:36:53.083449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.154 [2024-05-15 00:36:53.083456] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083462] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a3ac80): datao=0, datal=3072, cccid=4 00:21:27.154 [2024-05-15 00:36:53.083470] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9a3c0) on tqpair(0x1a3ac80): expected_datao=0, payload_size=3072 00:21:27.154 [2024-05-15 00:36:53.083478] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083488] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083496] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083540] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.154 [2024-05-15 00:36:53.083552] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.154 [2024-05-15 00:36:53.083559] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083570] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a3c0) on tqpair=0x1a3ac80 00:21:27.154 [2024-05-15 00:36:53.083587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a3ac80) 00:21:27.154 [2024-05-15 00:36:53.083607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.154 [2024-05-15 00:36:53.083636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a3c0, cid 4, qid 0 00:21:27.154 [2024-05-15 00:36:53.083808] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.154 [2024-05-15 00:36:53.083821] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.154 [2024-05-15 00:36:53.083828] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083834] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a3ac80): datao=0, datal=8, cccid=4 00:21:27.154 [2024-05-15 00:36:53.083842] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a9a3c0) on tqpair(0x1a3ac80): expected_datao=0, payload_size=8 00:21:27.154 [2024-05-15 00:36:53.083850] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083859] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.083867] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.127945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.154 [2024-05-15 00:36:53.127964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.154 [2024-05-15 00:36:53.127986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.154 [2024-05-15 00:36:53.127994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a3c0) on tqpair=0x1a3ac80 00:21:27.154 ===================================================== 00:21:27.154 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:27.154 ===================================================== 00:21:27.154 Controller Capabilities/Features 00:21:27.154 ================================ 00:21:27.154 Vendor ID: 0000 00:21:27.154 Subsystem Vendor ID: 0000 00:21:27.154 Serial Number: .................... 00:21:27.154 Model Number: ........................................ 
00:21:27.154 Firmware Version: 24.05 00:21:27.154 Recommended Arb Burst: 0 00:21:27.154 IEEE OUI Identifier: 00 00 00 00:21:27.154 Multi-path I/O 00:21:27.154 May have multiple subsystem ports: No 00:21:27.154 May have multiple controllers: No 00:21:27.154 Associated with SR-IOV VF: No 00:21:27.154 Max Data Transfer Size: 131072 00:21:27.154 Max Number of Namespaces: 0 00:21:27.154 Max Number of I/O Queues: 1024 00:21:27.154 NVMe Specification Version (VS): 1.3 00:21:27.154 NVMe Specification Version (Identify): 1.3 00:21:27.154 Maximum Queue Entries: 128 00:21:27.154 Contiguous Queues Required: Yes 00:21:27.154 Arbitration Mechanisms Supported 00:21:27.154 Weighted Round Robin: Not Supported 00:21:27.154 Vendor Specific: Not Supported 00:21:27.154 Reset Timeout: 15000 ms 00:21:27.154 Doorbell Stride: 4 bytes 00:21:27.154 NVM Subsystem Reset: Not Supported 00:21:27.154 Command Sets Supported 00:21:27.154 NVM Command Set: Supported 00:21:27.154 Boot Partition: Not Supported 00:21:27.154 Memory Page Size Minimum: 4096 bytes 00:21:27.154 Memory Page Size Maximum: 4096 bytes 00:21:27.154 Persistent Memory Region: Not Supported 00:21:27.154 Optional Asynchronous Events Supported 00:21:27.154 Namespace Attribute Notices: Not Supported 00:21:27.154 Firmware Activation Notices: Not Supported 00:21:27.154 ANA Change Notices: Not Supported 00:21:27.154 PLE Aggregate Log Change Notices: Not Supported 00:21:27.154 LBA Status Info Alert Notices: Not Supported 00:21:27.154 EGE Aggregate Log Change Notices: Not Supported 00:21:27.154 Normal NVM Subsystem Shutdown event: Not Supported 00:21:27.154 Zone Descriptor Change Notices: Not Supported 00:21:27.154 Discovery Log Change Notices: Supported 00:21:27.154 Controller Attributes 00:21:27.154 128-bit Host Identifier: Not Supported 00:21:27.154 Non-Operational Permissive Mode: Not Supported 00:21:27.154 NVM Sets: Not Supported 00:21:27.154 Read Recovery Levels: Not Supported 00:21:27.154 Endurance Groups: Not Supported 00:21:27.154 Predictable Latency Mode: Not Supported 00:21:27.154 Traffic Based Keep ALive: Not Supported 00:21:27.154 Namespace Granularity: Not Supported 00:21:27.154 SQ Associations: Not Supported 00:21:27.154 UUID List: Not Supported 00:21:27.154 Multi-Domain Subsystem: Not Supported 00:21:27.154 Fixed Capacity Management: Not Supported 00:21:27.155 Variable Capacity Management: Not Supported 00:21:27.155 Delete Endurance Group: Not Supported 00:21:27.155 Delete NVM Set: Not Supported 00:21:27.155 Extended LBA Formats Supported: Not Supported 00:21:27.155 Flexible Data Placement Supported: Not Supported 00:21:27.155 00:21:27.155 Controller Memory Buffer Support 00:21:27.155 ================================ 00:21:27.155 Supported: No 00:21:27.155 00:21:27.155 Persistent Memory Region Support 00:21:27.155 ================================ 00:21:27.155 Supported: No 00:21:27.155 00:21:27.155 Admin Command Set Attributes 00:21:27.155 ============================ 00:21:27.155 Security Send/Receive: Not Supported 00:21:27.155 Format NVM: Not Supported 00:21:27.155 Firmware Activate/Download: Not Supported 00:21:27.155 Namespace Management: Not Supported 00:21:27.155 Device Self-Test: Not Supported 00:21:27.155 Directives: Not Supported 00:21:27.155 NVMe-MI: Not Supported 00:21:27.155 Virtualization Management: Not Supported 00:21:27.155 Doorbell Buffer Config: Not Supported 00:21:27.155 Get LBA Status Capability: Not Supported 00:21:27.155 Command & Feature Lockdown Capability: Not Supported 00:21:27.155 Abort Command Limit: 1 00:21:27.155 Async 
Event Request Limit: 4 00:21:27.155 Number of Firmware Slots: N/A 00:21:27.155 Firmware Slot 1 Read-Only: N/A 00:21:27.155 Firmware Activation Without Reset: N/A 00:21:27.155 Multiple Update Detection Support: N/A 00:21:27.155 Firmware Update Granularity: No Information Provided 00:21:27.155 Per-Namespace SMART Log: No 00:21:27.155 Asymmetric Namespace Access Log Page: Not Supported 00:21:27.155 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:27.155 Command Effects Log Page: Not Supported 00:21:27.155 Get Log Page Extended Data: Supported 00:21:27.155 Telemetry Log Pages: Not Supported 00:21:27.155 Persistent Event Log Pages: Not Supported 00:21:27.155 Supported Log Pages Log Page: May Support 00:21:27.155 Commands Supported & Effects Log Page: Not Supported 00:21:27.155 Feature Identifiers & Effects Log Page:May Support 00:21:27.155 NVMe-MI Commands & Effects Log Page: May Support 00:21:27.155 Data Area 4 for Telemetry Log: Not Supported 00:21:27.155 Error Log Page Entries Supported: 128 00:21:27.155 Keep Alive: Not Supported 00:21:27.155 00:21:27.155 NVM Command Set Attributes 00:21:27.155 ========================== 00:21:27.155 Submission Queue Entry Size 00:21:27.155 Max: 1 00:21:27.155 Min: 1 00:21:27.155 Completion Queue Entry Size 00:21:27.155 Max: 1 00:21:27.155 Min: 1 00:21:27.155 Number of Namespaces: 0 00:21:27.155 Compare Command: Not Supported 00:21:27.155 Write Uncorrectable Command: Not Supported 00:21:27.155 Dataset Management Command: Not Supported 00:21:27.155 Write Zeroes Command: Not Supported 00:21:27.155 Set Features Save Field: Not Supported 00:21:27.155 Reservations: Not Supported 00:21:27.155 Timestamp: Not Supported 00:21:27.155 Copy: Not Supported 00:21:27.155 Volatile Write Cache: Not Present 00:21:27.155 Atomic Write Unit (Normal): 1 00:21:27.155 Atomic Write Unit (PFail): 1 00:21:27.155 Atomic Compare & Write Unit: 1 00:21:27.155 Fused Compare & Write: Supported 00:21:27.155 Scatter-Gather List 00:21:27.155 SGL Command Set: Supported 00:21:27.155 SGL Keyed: Supported 00:21:27.155 SGL Bit Bucket Descriptor: Not Supported 00:21:27.155 SGL Metadata Pointer: Not Supported 00:21:27.155 Oversized SGL: Not Supported 00:21:27.155 SGL Metadata Address: Not Supported 00:21:27.155 SGL Offset: Supported 00:21:27.155 Transport SGL Data Block: Not Supported 00:21:27.155 Replay Protected Memory Block: Not Supported 00:21:27.155 00:21:27.155 Firmware Slot Information 00:21:27.155 ========================= 00:21:27.155 Active slot: 0 00:21:27.155 00:21:27.155 00:21:27.155 Error Log 00:21:27.155 ========= 00:21:27.155 00:21:27.155 Active Namespaces 00:21:27.155 ================= 00:21:27.155 Discovery Log Page 00:21:27.155 ================== 00:21:27.155 Generation Counter: 2 00:21:27.155 Number of Records: 2 00:21:27.155 Record Format: 0 00:21:27.155 00:21:27.155 Discovery Log Entry 0 00:21:27.155 ---------------------- 00:21:27.155 Transport Type: 3 (TCP) 00:21:27.155 Address Family: 1 (IPv4) 00:21:27.155 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:27.155 Entry Flags: 00:21:27.155 Duplicate Returned Information: 1 00:21:27.155 Explicit Persistent Connection Support for Discovery: 1 00:21:27.155 Transport Requirements: 00:21:27.155 Secure Channel: Not Required 00:21:27.155 Port ID: 0 (0x0000) 00:21:27.155 Controller ID: 65535 (0xffff) 00:21:27.155 Admin Max SQ Size: 128 00:21:27.155 Transport Service Identifier: 4420 00:21:27.155 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:27.155 Transport Address: 10.0.0.2 00:21:27.155 
Discovery Log Entry 1 00:21:27.155 ---------------------- 00:21:27.155 Transport Type: 3 (TCP) 00:21:27.155 Address Family: 1 (IPv4) 00:21:27.155 Subsystem Type: 2 (NVM Subsystem) 00:21:27.155 Entry Flags: 00:21:27.155 Duplicate Returned Information: 0 00:21:27.155 Explicit Persistent Connection Support for Discovery: 0 00:21:27.155 Transport Requirements: 00:21:27.155 Secure Channel: Not Required 00:21:27.155 Port ID: 0 (0x0000) 00:21:27.155 Controller ID: 65535 (0xffff) 00:21:27.155 Admin Max SQ Size: 128 00:21:27.155 Transport Service Identifier: 4420 00:21:27.155 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:27.156 Transport Address: 10.0.0.2 [2024-05-15 00:36:53.128118] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:27.156 [2024-05-15 00:36:53.128145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.156 [2024-05-15 00:36:53.128158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.156 [2024-05-15 00:36:53.128168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.156 [2024-05-15 00:36:53.128177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.156 [2024-05-15 00:36:53.128192] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128201] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128207] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.128219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.128245] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.128398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 00:36:53.128411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.156 [2024-05-15 00:36:53.128418] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128424] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.156 [2024-05-15 00:36:53.128439] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128447] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128453] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.128468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.128495] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.128666] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 00:36:53.128679] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.156 [2024-05-15 00:36:53.128686] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128693] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.156 [2024-05-15 00:36:53.128705] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:27.156 [2024-05-15 00:36:53.128714] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:27.156 [2024-05-15 00:36:53.128730] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128739] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.128756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.128777] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.128964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 00:36:53.128978] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.156 [2024-05-15 00:36:53.128986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.128993] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.156 [2024-05-15 00:36:53.129011] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129021] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129027] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.129038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.129059] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.129219] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 00:36:53.129231] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.156 [2024-05-15 00:36:53.129238] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129245] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.156 [2024-05-15 00:36:53.129262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.129288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.129308] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.129458] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 
00:36:53.129470] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.156 [2024-05-15 00:36:53.129477] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129484] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.156 [2024-05-15 00:36:53.129501] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129514] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129521] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.129532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.129552] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.129705] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 00:36:53.129717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.156 [2024-05-15 00:36:53.129724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.156 [2024-05-15 00:36:53.129748] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129764] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.129774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.129794] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.129947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 00:36:53.129961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.156 [2024-05-15 00:36:53.129968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.129974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.156 [2024-05-15 00:36:53.129992] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.130001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.156 [2024-05-15 00:36:53.130007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.156 [2024-05-15 00:36:53.130018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.156 [2024-05-15 00:36:53.130039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.156 [2024-05-15 00:36:53.130189] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.156 [2024-05-15 00:36:53.130201] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.130208] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:27.157 [2024-05-15 00:36:53.130215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.130232] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130241] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130248] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.130258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.130278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.130427] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.130439] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.130446] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130453] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.130470] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130479] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130490] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.130501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.130521] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.130671] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.130686] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.130693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130700] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.130718] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130734] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.130744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.130765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.130916] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.130928] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.130955] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.130970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.130994] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.131023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.131044] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.131233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.131245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.131252] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131259] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.131276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131286] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131292] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.131302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.131323] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.131481] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.131496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.131503] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131510] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.131528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131537] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.131558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.131580] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.131735] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.131747] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.131754] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.131778] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.131787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 
00:36:53.131794] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.131804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.131825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.135943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.135960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.135967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.135975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.135994] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.136004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.136010] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a3ac80) 00:21:27.157 [2024-05-15 00:36:53.136021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.157 [2024-05-15 00:36:53.136042] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a9a260, cid 3, qid 0 00:21:27.157 [2024-05-15 00:36:53.136236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.157 [2024-05-15 00:36:53.136249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.157 [2024-05-15 00:36:53.136256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.157 [2024-05-15 00:36:53.136263] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a9a260) on tqpair=0x1a3ac80 00:21:27.157 [2024-05-15 00:36:53.136277] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:21:27.157 00:21:27.157 00:36:53 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:27.157 [2024-05-15 00:36:53.170438] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:21:27.158 [2024-05-15 00:36:53.170481] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938239 ] 00:21:27.158 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.158 [2024-05-15 00:36:53.205711] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:27.158 [2024-05-15 00:36:53.205756] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:27.158 [2024-05-15 00:36:53.205769] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:27.158 [2024-05-15 00:36:53.205783] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:27.158 [2024-05-15 00:36:53.205795] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:27.158 [2024-05-15 00:36:53.206137] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:27.158 [2024-05-15 00:36:53.206174] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa4ac80 0 00:21:27.158 [2024-05-15 00:36:53.219940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:27.158 [2024-05-15 00:36:53.219966] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:27.158 [2024-05-15 00:36:53.219975] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:27.158 [2024-05-15 00:36:53.219981] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:27.158 [2024-05-15 00:36:53.220038] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.220051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.220057] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.158 [2024-05-15 00:36:53.220072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:27.158 [2024-05-15 00:36:53.220098] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.158 [2024-05-15 00:36:53.227949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.158 [2024-05-15 00:36:53.227967] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.158 [2024-05-15 00:36:53.227974] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.227981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.158 [2024-05-15 00:36:53.227998] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:27.158 [2024-05-15 00:36:53.228024] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:27.158 [2024-05-15 00:36:53.228034] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:27.158 [2024-05-15 00:36:53.228050] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228059] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.158 [2024-05-15 
00:36:53.228066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.158 [2024-05-15 00:36:53.228077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.158 [2024-05-15 00:36:53.228101] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.158 [2024-05-15 00:36:53.228296] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.158 [2024-05-15 00:36:53.228308] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.158 [2024-05-15 00:36:53.228315] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228322] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.158 [2024-05-15 00:36:53.228330] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:27.158 [2024-05-15 00:36:53.228342] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:27.158 [2024-05-15 00:36:53.228354] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228362] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228368] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.158 [2024-05-15 00:36:53.228379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.158 [2024-05-15 00:36:53.228405] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.158 [2024-05-15 00:36:53.228589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.158 [2024-05-15 00:36:53.228605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.158 [2024-05-15 00:36:53.228611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.158 [2024-05-15 00:36:53.228626] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:27.158 [2024-05-15 00:36:53.228640] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:27.158 [2024-05-15 00:36:53.228652] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228660] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228666] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.158 [2024-05-15 00:36:53.228676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.158 [2024-05-15 00:36:53.228697] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.158 [2024-05-15 00:36:53.228884] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.158 [2024-05-15 00:36:53.228896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.158 
[2024-05-15 00:36:53.228903] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228909] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.158 [2024-05-15 00:36:53.228918] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:27.158 [2024-05-15 00:36:53.228942] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228958] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.228966] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.158 [2024-05-15 00:36:53.228977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.158 [2024-05-15 00:36:53.229000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.158 [2024-05-15 00:36:53.229192] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.158 [2024-05-15 00:36:53.229205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.158 [2024-05-15 00:36:53.229211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.158 [2024-05-15 00:36:53.229218] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.158 [2024-05-15 00:36:53.229226] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:27.158 [2024-05-15 00:36:53.229234] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:27.158 [2024-05-15 00:36:53.229247] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:27.158 [2024-05-15 00:36:53.229356] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:27.158 [2024-05-15 00:36:53.229363] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:27.159 [2024-05-15 00:36:53.229375] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.229383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.229393] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.229419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.159 [2024-05-15 00:36:53.229440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.159 [2024-05-15 00:36:53.229656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.159 [2024-05-15 00:36:53.229672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.159 [2024-05-15 00:36:53.229679] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.229686] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.159 
[2024-05-15 00:36:53.229694] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:27.159 [2024-05-15 00:36:53.229711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.229720] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.229726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.229737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.159 [2024-05-15 00:36:53.229758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.159 [2024-05-15 00:36:53.229955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.159 [2024-05-15 00:36:53.229971] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.159 [2024-05-15 00:36:53.229978] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.229985] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.159 [2024-05-15 00:36:53.229992] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:27.159 [2024-05-15 00:36:53.230000] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:27.159 [2024-05-15 00:36:53.230014] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:27.159 [2024-05-15 00:36:53.230028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:27.159 [2024-05-15 00:36:53.230043] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230050] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.230061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.159 [2024-05-15 00:36:53.230083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.159 [2024-05-15 00:36:53.230298] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.159 [2024-05-15 00:36:53.230311] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.159 [2024-05-15 00:36:53.230318] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230324] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=4096, cccid=0 00:21:27.159 [2024-05-15 00:36:53.230332] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa9e40) on tqpair(0xa4ac80): expected_datao=0, payload_size=4096 00:21:27.159 [2024-05-15 00:36:53.230339] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230384] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230398] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:21:27.159 [2024-05-15 00:36:53.230532] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.159 [2024-05-15 00:36:53.230547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.159 [2024-05-15 00:36:53.230555] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230561] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.159 [2024-05-15 00:36:53.230572] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:27.159 [2024-05-15 00:36:53.230580] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:27.159 [2024-05-15 00:36:53.230588] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:27.159 [2024-05-15 00:36:53.230594] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:27.159 [2024-05-15 00:36:53.230602] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:27.159 [2024-05-15 00:36:53.230609] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:27.159 [2024-05-15 00:36:53.230628] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:27.159 [2024-05-15 00:36:53.230643] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230651] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.230668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:27.159 [2024-05-15 00:36:53.230690] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.159 [2024-05-15 00:36:53.230882] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.159 [2024-05-15 00:36:53.230895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.159 [2024-05-15 00:36:53.230901] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230908] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaa9e40) on tqpair=0xa4ac80 00:21:27.159 [2024-05-15 00:36:53.230923] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230940] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230947] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.230957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.159 [2024-05-15 00:36:53.230967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.230981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.230989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.159 [2024-05-15 00:36:53.230999] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.231005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.231012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.231020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.159 [2024-05-15 00:36:53.231030] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.231036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.159 [2024-05-15 00:36:53.231042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.159 [2024-05-15 00:36:53.231055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.159 [2024-05-15 00:36:53.231064] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:27.160 [2024-05-15 00:36:53.231079] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:27.160 [2024-05-15 00:36:53.231090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.231097] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4ac80) 00:21:27.160 [2024-05-15 00:36:53.231107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.160 [2024-05-15 00:36:53.231130] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9e40, cid 0, qid 0 00:21:27.160 [2024-05-15 00:36:53.231141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa9fa0, cid 1, qid 0 00:21:27.160 [2024-05-15 00:36:53.231149] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa100, cid 2, qid 0 00:21:27.160 [2024-05-15 00:36:53.231157] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.160 [2024-05-15 00:36:53.231164] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa3c0, cid 4, qid 0 00:21:27.160 [2024-05-15 00:36:53.231379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.160 [2024-05-15 00:36:53.231394] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.160 [2024-05-15 00:36:53.231401] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.231407] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa3c0) on tqpair=0xa4ac80 00:21:27.160 [2024-05-15 00:36:53.231420] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:27.160 [2024-05-15 00:36:53.231430] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:27.160 
[2024-05-15 00:36:53.231445] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:27.160 [2024-05-15 00:36:53.231457] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:27.160 [2024-05-15 00:36:53.231468] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.231476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.231482] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4ac80) 00:21:27.160 [2024-05-15 00:36:53.231493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:27.160 [2024-05-15 00:36:53.231514] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa3c0, cid 4, qid 0 00:21:27.160 [2024-05-15 00:36:53.231716] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.160 [2024-05-15 00:36:53.231731] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.160 [2024-05-15 00:36:53.231738] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.231744] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa3c0) on tqpair=0xa4ac80 00:21:27.160 [2024-05-15 00:36:53.231802] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:27.160 [2024-05-15 00:36:53.231824] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:27.160 [2024-05-15 00:36:53.231839] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.231847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4ac80) 00:21:27.160 [2024-05-15 00:36:53.231861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.160 [2024-05-15 00:36:53.231883] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa3c0, cid 4, qid 0 00:21:27.160 [2024-05-15 00:36:53.235945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.160 [2024-05-15 00:36:53.235962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.160 [2024-05-15 00:36:53.235969] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.235975] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=4096, cccid=4 00:21:27.160 [2024-05-15 00:36:53.235983] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaa3c0) on tqpair(0xa4ac80): expected_datao=0, payload_size=4096 00:21:27.160 [2024-05-15 00:36:53.235990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.236000] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.236007] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.275949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.160 [2024-05-15 00:36:53.275969] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.160 [2024-05-15 00:36:53.275977] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.160 [2024-05-15 00:36:53.275984] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa3c0) on tqpair=0xa4ac80 00:21:27.160 [2024-05-15 00:36:53.276012] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:27.160 [2024-05-15 00:36:53.276032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:27.160 [2024-05-15 00:36:53.276052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:27.161 [2024-05-15 00:36:53.276066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.161 [2024-05-15 00:36:53.276074] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4ac80) 00:21:27.161 [2024-05-15 00:36:53.276085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.161 [2024-05-15 00:36:53.276109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa3c0, cid 4, qid 0 00:21:27.161 [2024-05-15 00:36:53.276315] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.161 [2024-05-15 00:36:53.276328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.161 [2024-05-15 00:36:53.276335] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.161 [2024-05-15 00:36:53.276341] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=4096, cccid=4 00:21:27.161 [2024-05-15 00:36:53.276349] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaa3c0) on tqpair(0xa4ac80): expected_datao=0, payload_size=4096 00:21:27.161 [2024-05-15 00:36:53.276356] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.161 [2024-05-15 00:36:53.276394] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.161 [2024-05-15 00:36:53.276403] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.317103] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.437 [2024-05-15 00:36:53.317124] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.437 [2024-05-15 00:36:53.317132] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.317139] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa3c0) on tqpair=0xa4ac80 00:21:27.437 [2024-05-15 00:36:53.317158] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:27.437 [2024-05-15 00:36:53.317193] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:27.437 [2024-05-15 00:36:53.317209] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.317217] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4ac80) 00:21:27.437 [2024-05-15 00:36:53.317229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.437 [2024-05-15 00:36:53.317252] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa3c0, cid 4, qid 0 00:21:27.437 [2024-05-15 00:36:53.317431] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.437 [2024-05-15 00:36:53.317444] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.437 [2024-05-15 00:36:53.317451] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.317458] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=4096, cccid=4 00:21:27.437 [2024-05-15 00:36:53.317465] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaa3c0) on tqpair(0xa4ac80): expected_datao=0, payload_size=4096 00:21:27.437 [2024-05-15 00:36:53.317473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.317521] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.317531] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.358095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.437 [2024-05-15 00:36:53.358116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.437 [2024-05-15 00:36:53.358124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.437 [2024-05-15 00:36:53.358131] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa3c0) on tqpair=0xa4ac80 00:21:27.437 [2024-05-15 00:36:53.358152] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:27.437 [2024-05-15 00:36:53.358169] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:27.438 [2024-05-15 00:36:53.358184] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:27.438 [2024-05-15 00:36:53.358195] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:27.438 [2024-05-15 00:36:53.358204] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:27.438 [2024-05-15 00:36:53.358213] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:27.438 [2024-05-15 00:36:53.358221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:27.438 [2024-05-15 00:36:53.358229] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:27.438 [2024-05-15 00:36:53.358252] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4ac80) 00:21:27.438 [2024-05-15 00:36:53.358273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.438 [2024-05-15 00:36:53.358284] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358292] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358298] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4ac80) 00:21:27.438 [2024-05-15 00:36:53.358307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.438 [2024-05-15 00:36:53.358337] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa3c0, cid 4, qid 0 00:21:27.438 [2024-05-15 00:36:53.358350] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa520, cid 5, qid 0 00:21:27.438 [2024-05-15 00:36:53.358511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.438 [2024-05-15 00:36:53.358526] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.438 [2024-05-15 00:36:53.358533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358540] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa3c0) on tqpair=0xa4ac80 00:21:27.438 [2024-05-15 00:36:53.358551] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.438 [2024-05-15 00:36:53.358560] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.438 [2024-05-15 00:36:53.358567] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa520) on tqpair=0xa4ac80 00:21:27.438 [2024-05-15 00:36:53.358589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4ac80) 00:21:27.438 [2024-05-15 00:36:53.358609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.438 [2024-05-15 00:36:53.358630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa520, cid 5, qid 0 00:21:27.438 [2024-05-15 00:36:53.358823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.438 [2024-05-15 00:36:53.358836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.438 [2024-05-15 00:36:53.358842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa520) on tqpair=0xa4ac80 00:21:27.438 [2024-05-15 00:36:53.358865] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.358874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4ac80) 00:21:27.438 [2024-05-15 00:36:53.358884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.438 [2024-05-15 00:36:53.358904] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa520, cid 5, qid 0 00:21:27.438 [2024-05-15 00:36:53.359065] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.438 [2024-05-15 00:36:53.359079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.438 [2024-05-15 00:36:53.359086] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.359092] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa520) on tqpair=0xa4ac80 00:21:27.438 [2024-05-15 00:36:53.359108] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.438 [2024-05-15 00:36:53.359117] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4ac80) 00:21:27.438 [2024-05-15 00:36:53.359127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.439 [2024-05-15 00:36:53.359148] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa520, cid 5, qid 0 00:21:27.439 [2024-05-15 00:36:53.359298] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.439 [2024-05-15 00:36:53.359310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.439 [2024-05-15 00:36:53.359317] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359324] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa520) on tqpair=0xa4ac80 00:21:27.439 [2024-05-15 00:36:53.359343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4ac80) 00:21:27.439 [2024-05-15 00:36:53.359367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.439 [2024-05-15 00:36:53.359381] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4ac80) 00:21:27.439 [2024-05-15 00:36:53.359398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.439 [2024-05-15 00:36:53.359409] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa4ac80) 00:21:27.439 [2024-05-15 00:36:53.359426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.439 [2024-05-15 00:36:53.359442] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359450] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa4ac80) 00:21:27.439 [2024-05-15 00:36:53.359460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.439 [2024-05-15 00:36:53.359482] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa520, cid 5, qid 0 00:21:27.439 [2024-05-15 00:36:53.359493] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa3c0, cid 4, qid 0 00:21:27.439 [2024-05-15 00:36:53.359501] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa680, cid 6, qid 0 00:21:27.439 [2024-05-15 00:36:53.359509] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa7e0, cid 7, qid 0 00:21:27.439 [2024-05-15 00:36:53.359768] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.439 [2024-05-15 00:36:53.359781] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.439 [2024-05-15 00:36:53.359788] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359794] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=8192, cccid=5 00:21:27.439 [2024-05-15 00:36:53.359802] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaa520) on tqpair(0xa4ac80): expected_datao=0, payload_size=8192 00:21:27.439 [2024-05-15 00:36:53.359809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359859] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359870] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.439 [2024-05-15 00:36:53.359888] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.439 [2024-05-15 00:36:53.359894] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359901] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=512, cccid=4 00:21:27.439 [2024-05-15 00:36:53.359908] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaa3c0) on tqpair(0xa4ac80): expected_datao=0, payload_size=512 00:21:27.439 [2024-05-15 00:36:53.359915] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.359924] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.363943] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.363958] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.439 [2024-05-15 00:36:53.363968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.439 [2024-05-15 00:36:53.363974] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.363981] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=512, cccid=6 00:21:27.439 [2024-05-15 00:36:53.363992] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaa680) on tqpair(0xa4ac80): expected_datao=0, payload_size=512 00:21:27.439 [2024-05-15 00:36:53.364001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364010] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364017] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364025] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:27.439 [2024-05-15 00:36:53.364034] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:27.439 [2024-05-15 00:36:53.364041] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364047] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4ac80): datao=0, datal=4096, cccid=7 00:21:27.439 [2024-05-15 00:36:53.364054] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xaaa7e0) on tqpair(0xa4ac80): expected_datao=0, payload_size=4096 00:21:27.439 [2024-05-15 00:36:53.364061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364071] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364078] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.439 [2024-05-15 00:36:53.364100] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.439 [2024-05-15 00:36:53.364106] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364113] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa520) on tqpair=0xa4ac80 00:21:27.439 [2024-05-15 00:36:53.364133] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.439 [2024-05-15 00:36:53.364144] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.439 [2024-05-15 00:36:53.364151] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa3c0) on tqpair=0xa4ac80 00:21:27.439 [2024-05-15 00:36:53.364171] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.439 [2024-05-15 00:36:53.364181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.439 [2024-05-15 00:36:53.364188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa680) on tqpair=0xa4ac80 00:21:27.439 [2024-05-15 00:36:53.364224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.439 [2024-05-15 00:36:53.364234] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.439 [2024-05-15 00:36:53.364240] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.439 [2024-05-15 00:36:53.364247] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa7e0) on tqpair=0xa4ac80 00:21:27.439 ===================================================== 00:21:27.439 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.439 ===================================================== 00:21:27.439 Controller Capabilities/Features 00:21:27.439 ================================ 00:21:27.439 Vendor ID: 8086 00:21:27.439 Subsystem Vendor ID: 8086 00:21:27.439 Serial Number: SPDK00000000000001 00:21:27.439 Model Number: SPDK bdev Controller 00:21:27.439 Firmware Version: 24.05 00:21:27.439 Recommended Arb Burst: 6 00:21:27.439 IEEE OUI Identifier: e4 d2 5c 00:21:27.439 Multi-path I/O 00:21:27.439 May have multiple subsystem ports: Yes 00:21:27.439 May have multiple controllers: Yes 00:21:27.439 Associated with SR-IOV VF: No 00:21:27.439 Max Data Transfer Size: 131072 00:21:27.439 Max Number of Namespaces: 32 00:21:27.439 Max Number of I/O Queues: 127 00:21:27.439 NVMe Specification Version (VS): 1.3 00:21:27.439 NVMe Specification Version (Identify): 1.3 00:21:27.439 Maximum Queue Entries: 128 00:21:27.439 Contiguous Queues Required: Yes 00:21:27.439 Arbitration Mechanisms Supported 00:21:27.439 Weighted Round Robin: Not Supported 00:21:27.439 Vendor Specific: Not Supported 00:21:27.439 Reset Timeout: 15000 ms 00:21:27.439 Doorbell Stride: 4 bytes 00:21:27.439 
NVM Subsystem Reset: Not Supported 00:21:27.439 Command Sets Supported 00:21:27.439 NVM Command Set: Supported 00:21:27.439 Boot Partition: Not Supported 00:21:27.439 Memory Page Size Minimum: 4096 bytes 00:21:27.439 Memory Page Size Maximum: 4096 bytes 00:21:27.439 Persistent Memory Region: Not Supported 00:21:27.439 Optional Asynchronous Events Supported 00:21:27.439 Namespace Attribute Notices: Supported 00:21:27.439 Firmware Activation Notices: Not Supported 00:21:27.439 ANA Change Notices: Not Supported 00:21:27.439 PLE Aggregate Log Change Notices: Not Supported 00:21:27.439 LBA Status Info Alert Notices: Not Supported 00:21:27.439 EGE Aggregate Log Change Notices: Not Supported 00:21:27.439 Normal NVM Subsystem Shutdown event: Not Supported 00:21:27.439 Zone Descriptor Change Notices: Not Supported 00:21:27.440 Discovery Log Change Notices: Not Supported 00:21:27.440 Controller Attributes 00:21:27.440 128-bit Host Identifier: Supported 00:21:27.440 Non-Operational Permissive Mode: Not Supported 00:21:27.440 NVM Sets: Not Supported 00:21:27.440 Read Recovery Levels: Not Supported 00:21:27.440 Endurance Groups: Not Supported 00:21:27.440 Predictable Latency Mode: Not Supported 00:21:27.440 Traffic Based Keep ALive: Not Supported 00:21:27.440 Namespace Granularity: Not Supported 00:21:27.440 SQ Associations: Not Supported 00:21:27.440 UUID List: Not Supported 00:21:27.440 Multi-Domain Subsystem: Not Supported 00:21:27.440 Fixed Capacity Management: Not Supported 00:21:27.440 Variable Capacity Management: Not Supported 00:21:27.440 Delete Endurance Group: Not Supported 00:21:27.440 Delete NVM Set: Not Supported 00:21:27.440 Extended LBA Formats Supported: Not Supported 00:21:27.440 Flexible Data Placement Supported: Not Supported 00:21:27.440 00:21:27.440 Controller Memory Buffer Support 00:21:27.440 ================================ 00:21:27.440 Supported: No 00:21:27.440 00:21:27.440 Persistent Memory Region Support 00:21:27.440 ================================ 00:21:27.440 Supported: No 00:21:27.440 00:21:27.440 Admin Command Set Attributes 00:21:27.440 ============================ 00:21:27.440 Security Send/Receive: Not Supported 00:21:27.440 Format NVM: Not Supported 00:21:27.440 Firmware Activate/Download: Not Supported 00:21:27.440 Namespace Management: Not Supported 00:21:27.440 Device Self-Test: Not Supported 00:21:27.440 Directives: Not Supported 00:21:27.440 NVMe-MI: Not Supported 00:21:27.440 Virtualization Management: Not Supported 00:21:27.440 Doorbell Buffer Config: Not Supported 00:21:27.440 Get LBA Status Capability: Not Supported 00:21:27.440 Command & Feature Lockdown Capability: Not Supported 00:21:27.440 Abort Command Limit: 4 00:21:27.440 Async Event Request Limit: 4 00:21:27.440 Number of Firmware Slots: N/A 00:21:27.440 Firmware Slot 1 Read-Only: N/A 00:21:27.440 Firmware Activation Without Reset: N/A 00:21:27.440 Multiple Update Detection Support: N/A 00:21:27.440 Firmware Update Granularity: No Information Provided 00:21:27.440 Per-Namespace SMART Log: No 00:21:27.440 Asymmetric Namespace Access Log Page: Not Supported 00:21:27.440 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:27.440 Command Effects Log Page: Supported 00:21:27.440 Get Log Page Extended Data: Supported 00:21:27.440 Telemetry Log Pages: Not Supported 00:21:27.440 Persistent Event Log Pages: Not Supported 00:21:27.440 Supported Log Pages Log Page: May Support 00:21:27.440 Commands Supported & Effects Log Page: Not Supported 00:21:27.440 Feature Identifiers & Effects Log Page:May Support 
00:21:27.440 NVMe-MI Commands & Effects Log Page: May Support 00:21:27.440 Data Area 4 for Telemetry Log: Not Supported 00:21:27.440 Error Log Page Entries Supported: 128 00:21:27.440 Keep Alive: Supported 00:21:27.440 Keep Alive Granularity: 10000 ms 00:21:27.440 00:21:27.440 NVM Command Set Attributes 00:21:27.440 ========================== 00:21:27.440 Submission Queue Entry Size 00:21:27.440 Max: 64 00:21:27.440 Min: 64 00:21:27.440 Completion Queue Entry Size 00:21:27.440 Max: 16 00:21:27.440 Min: 16 00:21:27.440 Number of Namespaces: 32 00:21:27.440 Compare Command: Supported 00:21:27.440 Write Uncorrectable Command: Not Supported 00:21:27.440 Dataset Management Command: Supported 00:21:27.440 Write Zeroes Command: Supported 00:21:27.440 Set Features Save Field: Not Supported 00:21:27.440 Reservations: Supported 00:21:27.440 Timestamp: Not Supported 00:21:27.440 Copy: Supported 00:21:27.440 Volatile Write Cache: Present 00:21:27.440 Atomic Write Unit (Normal): 1 00:21:27.440 Atomic Write Unit (PFail): 1 00:21:27.440 Atomic Compare & Write Unit: 1 00:21:27.440 Fused Compare & Write: Supported 00:21:27.440 Scatter-Gather List 00:21:27.440 SGL Command Set: Supported 00:21:27.440 SGL Keyed: Supported 00:21:27.440 SGL Bit Bucket Descriptor: Not Supported 00:21:27.440 SGL Metadata Pointer: Not Supported 00:21:27.440 Oversized SGL: Not Supported 00:21:27.440 SGL Metadata Address: Not Supported 00:21:27.440 SGL Offset: Supported 00:21:27.440 Transport SGL Data Block: Not Supported 00:21:27.440 Replay Protected Memory Block: Not Supported 00:21:27.440 00:21:27.440 Firmware Slot Information 00:21:27.440 ========================= 00:21:27.440 Active slot: 1 00:21:27.440 Slot 1 Firmware Revision: 24.05 00:21:27.440 00:21:27.440 00:21:27.440 Commands Supported and Effects 00:21:27.440 ============================== 00:21:27.440 Admin Commands 00:21:27.440 -------------- 00:21:27.440 Get Log Page (02h): Supported 00:21:27.440 Identify (06h): Supported 00:21:27.440 Abort (08h): Supported 00:21:27.440 Set Features (09h): Supported 00:21:27.440 Get Features (0Ah): Supported 00:21:27.440 Asynchronous Event Request (0Ch): Supported 00:21:27.440 Keep Alive (18h): Supported 00:21:27.440 I/O Commands 00:21:27.440 ------------ 00:21:27.440 Flush (00h): Supported LBA-Change 00:21:27.440 Write (01h): Supported LBA-Change 00:21:27.440 Read (02h): Supported 00:21:27.440 Compare (05h): Supported 00:21:27.440 Write Zeroes (08h): Supported LBA-Change 00:21:27.440 Dataset Management (09h): Supported LBA-Change 00:21:27.440 Copy (19h): Supported LBA-Change 00:21:27.440 Unknown (79h): Supported LBA-Change 00:21:27.440 Unknown (7Ah): Supported 00:21:27.440 00:21:27.440 Error Log 00:21:27.440 ========= 00:21:27.440 00:21:27.440 Arbitration 00:21:27.440 =========== 00:21:27.440 Arbitration Burst: 1 00:21:27.440 00:21:27.440 Power Management 00:21:27.440 ================ 00:21:27.440 Number of Power States: 1 00:21:27.440 Current Power State: Power State #0 00:21:27.440 Power State #0: 00:21:27.440 Max Power: 0.00 W 00:21:27.440 Non-Operational State: Operational 00:21:27.440 Entry Latency: Not Reported 00:21:27.440 Exit Latency: Not Reported 00:21:27.440 Relative Read Throughput: 0 00:21:27.440 Relative Read Latency: 0 00:21:27.440 Relative Write Throughput: 0 00:21:27.440 Relative Write Latency: 0 00:21:27.440 Idle Power: Not Reported 00:21:27.440 Active Power: Not Reported 00:21:27.440 Non-Operational Permissive Mode: Not Supported 00:21:27.440 00:21:27.440 Health Information 00:21:27.440 ================== 
00:21:27.440 Critical Warnings: 00:21:27.440 Available Spare Space: OK 00:21:27.440 Temperature: OK 00:21:27.440 Device Reliability: OK 00:21:27.440 Read Only: No 00:21:27.440 Volatile Memory Backup: OK 00:21:27.440 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:27.440 Temperature Threshold: [2024-05-15 00:36:53.364390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.440 [2024-05-15 00:36:53.364402] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa4ac80) 00:21:27.440 [2024-05-15 00:36:53.364413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.440 [2024-05-15 00:36:53.364436] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa7e0, cid 7, qid 0 00:21:27.440 [2024-05-15 00:36:53.364647] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.440 [2024-05-15 00:36:53.364660] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.440 [2024-05-15 00:36:53.364667] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.440 [2024-05-15 00:36:53.364674] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa7e0) on tqpair=0xa4ac80 00:21:27.440 [2024-05-15 00:36:53.364715] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:27.440 [2024-05-15 00:36:53.364736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.441 [2024-05-15 00:36:53.364752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.441 [2024-05-15 00:36:53.364762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.441 [2024-05-15 00:36:53.364771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.441 [2024-05-15 00:36:53.364784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.364792] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.364798] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.364808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.364831] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.365034] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.365051] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.365058] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365064] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.365076] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365084] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365090] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.365100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.365127] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.365336] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.365351] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.365358] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.365373] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:27.441 [2024-05-15 00:36:53.365381] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:27.441 [2024-05-15 00:36:53.365397] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.365422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.365442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.365623] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.365635] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.365642] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365649] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.365665] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365680] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.365694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.365715] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.365866] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.365881] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.365888] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365895] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.365911] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365920] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.365927] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.365944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.365966] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.366118] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.366134] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.366140] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366147] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.366163] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.366189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.366209] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.366361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.366376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.366383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.366406] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366415] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366422] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.366432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.366452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.366603] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.366618] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.366625] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366632] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.366648] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366663] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 
[2024-05-15 00:36:53.366674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.366698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.366853] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.366866] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.366872] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366879] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.366895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366904] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.366910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.366921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.366948] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.367108] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.367120] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.367127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.367134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.367149] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.367158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.367165] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.367175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.441 [2024-05-15 00:36:53.367195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.441 [2024-05-15 00:36:53.367354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.441 [2024-05-15 00:36:53.367366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.441 [2024-05-15 00:36:53.367373] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.367380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.441 [2024-05-15 00:36:53.367396] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.367404] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.441 [2024-05-15 00:36:53.367411] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.441 [2024-05-15 00:36:53.367421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.442 [2024-05-15 00:36:53.367440] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.442 [2024-05-15 00:36:53.367590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.442 [2024-05-15 00:36:53.367602] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.442 [2024-05-15 00:36:53.367609] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.367615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.442 [2024-05-15 00:36:53.367631] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.367640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.367646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.442 [2024-05-15 00:36:53.367657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.442 [2024-05-15 00:36:53.367676] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.442 [2024-05-15 00:36:53.367828] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.442 [2024-05-15 00:36:53.367843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.442 [2024-05-15 00:36:53.367850] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.367857] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.442 [2024-05-15 00:36:53.367873] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.367882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.367889] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.442 [2024-05-15 00:36:53.367899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.442 [2024-05-15 00:36:53.367920] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.442 [2024-05-15 00:36:53.371947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.442 [2024-05-15 00:36:53.371964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.442 [2024-05-15 00:36:53.371971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.371978] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.442 [2024-05-15 00:36:53.371996] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.372005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.372012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4ac80) 00:21:27.442 [2024-05-15 00:36:53.372023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.442 [2024-05-15 00:36:53.372045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaa260, cid 3, qid 0 00:21:27.442 [2024-05-15 00:36:53.372224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:27.442 
[2024-05-15 00:36:53.372236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:27.442 [2024-05-15 00:36:53.372243] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:27.442 [2024-05-15 00:36:53.372249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xaaa260) on tqpair=0xa4ac80 00:21:27.442 [2024-05-15 00:36:53.372262] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:21:27.442 0 Kelvin (-273 Celsius) 00:21:27.442 Available Spare: 0% 00:21:27.442 Available Spare Threshold: 0% 00:21:27.442 Life Percentage Used: 0% 00:21:27.442 Data Units Read: 0 00:21:27.442 Data Units Written: 0 00:21:27.442 Host Read Commands: 0 00:21:27.442 Host Write Commands: 0 00:21:27.442 Controller Busy Time: 0 minutes 00:21:27.442 Power Cycles: 0 00:21:27.442 Power On Hours: 0 hours 00:21:27.442 Unsafe Shutdowns: 0 00:21:27.442 Unrecoverable Media Errors: 0 00:21:27.442 Lifetime Error Log Entries: 0 00:21:27.442 Warning Temperature Time: 0 minutes 00:21:27.442 Critical Temperature Time: 0 minutes 00:21:27.442 00:21:27.442 Number of Queues 00:21:27.442 ================ 00:21:27.442 Number of I/O Submission Queues: 127 00:21:27.442 Number of I/O Completion Queues: 127 00:21:27.442 00:21:27.442 Active Namespaces 00:21:27.442 ================= 00:21:27.442 Namespace ID:1 00:21:27.442 Error Recovery Timeout: Unlimited 00:21:27.442 Command Set Identifier: NVM (00h) 00:21:27.442 Deallocate: Supported 00:21:27.442 Deallocated/Unwritten Error: Not Supported 00:21:27.442 Deallocated Read Value: Unknown 00:21:27.442 Deallocate in Write Zeroes: Not Supported 00:21:27.442 Deallocated Guard Field: 0xFFFF 00:21:27.442 Flush: Supported 00:21:27.442 Reservation: Supported 00:21:27.442 Namespace Sharing Capabilities: Multiple Controllers 00:21:27.442 Size (in LBAs): 131072 (0GiB) 00:21:27.442 Capacity (in LBAs): 131072 (0GiB) 00:21:27.442 Utilization (in LBAs): 131072 (0GiB) 00:21:27.442 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:27.442 EUI64: ABCDEF0123456789 00:21:27.442 UUID: d859a8b4-a07c-418b-9509-dab0c51310d0 00:21:27.442 Thin Provisioning: Not Supported 00:21:27.442 Per-NS Atomic Units: Yes 00:21:27.442 Atomic Boundary Size (Normal): 0 00:21:27.442 Atomic Boundary Size (PFail): 0 00:21:27.442 Atomic Boundary Offset: 0 00:21:27.442 Maximum Single Source Range Length: 65535 00:21:27.442 Maximum Copy Length: 65535 00:21:27.442 Maximum Source Range Count: 1 00:21:27.442 NGUID/EUI64 Never Reused: No 00:21:27.442 Namespace Write Protected: No 00:21:27.442 Number of LBA Formats: 1 00:21:27.442 Current LBA Format: LBA Format #00 00:21:27.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:27.442 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.442 00:36:53 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.442 rmmod nvme_tcp 00:21:27.442 rmmod nvme_fabrics 00:21:27.442 rmmod nvme_keyring 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 938203 ']' 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 938203 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 938203 ']' 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 938203 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 938203 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 938203' 00:21:27.442 killing process with pid 938203 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 938203 00:21:27.442 [2024-05-15 00:36:53.491858] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:27.442 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 938203 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.701 00:36:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.234 00:36:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.234 00:21:30.234 real 0m6.006s 00:21:30.234 user 0m4.995s 00:21:30.234 sys 0m2.225s 00:21:30.234 00:36:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:30.234 00:36:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:30.234 ************************************ 00:21:30.234 END TEST nvmf_identify 00:21:30.234 
************************************ 00:21:30.234 00:36:55 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:30.234 00:36:55 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:30.234 00:36:55 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:30.234 00:36:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.234 ************************************ 00:21:30.234 START TEST nvmf_perf 00:21:30.234 ************************************ 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:30.234 * Looking for test storage... 00:21:30.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.234 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.235 00:36:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:32.797 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:32.797 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:32.797 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.797 00:36:58 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:32.797 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.797 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:32.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:21:32.798 00:21:32.798 --- 10.0.0.2 ping statistics --- 00:21:32.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.798 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:21:32.798 00:21:32.798 --- 10.0.0.1 ping statistics --- 00:21:32.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.798 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=940582 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 940582 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 940582 ']' 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:32.798 00:36:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:32.798 [2024-05-15 00:36:58.551557] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
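The nvmftestinit/nvmf_tcp_init sequence traced above condenses to the following sketch (interface names, addresses and the namespace name are taken from this run; cvl_0_0/cvl_0_1 are the two ice ports found under 0000:0a:00.0/1 and will differ on other hosts):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> root ns
    modprobe nvme-tcp

The two pings verify the back-to-back wiring in both directions before nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, which is what the startup messages around this point show.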
00:21:32.798 [2024-05-15 00:36:58.551636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.798 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.798 [2024-05-15 00:36:58.628874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.798 [2024-05-15 00:36:58.739934] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.798 [2024-05-15 00:36:58.739992] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.798 [2024-05-15 00:36:58.740007] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.798 [2024-05-15 00:36:58.740018] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.798 [2024-05-15 00:36:58.740028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.798 [2024-05-15 00:36:58.740081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.798 [2024-05-15 00:36:58.740138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.798 [2024-05-15 00:36:58.740211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.798 [2024-05-15 00:36:58.740214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.364 00:36:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:33.364 00:36:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:21:33.364 00:36:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.364 00:36:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:33.364 00:36:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:33.622 00:36:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.622 00:36:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:33.622 00:36:59 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:36.906 00:37:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:36.906 00:37:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:36.906 00:37:02 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:36.906 00:37:02 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:37.164 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:37.164 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:37.164 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:37.164 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:37.164 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:37.422 [2024-05-15 00:37:03.379900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
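With the TCP transport initialized, host/perf.sh builds the target configuration over JSON-RPC. Condensed into a sketch (the full rpc.py path from the trace is shortened to a variable; the subsystem NQN, serial number, bdev names and listener address are taken from the commands traced immediately before and after this point):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                                 # -> Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                  # local NVMe at 0000:88:00.0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow then attach either to the local PCIe controller (-r 'trtype:PCIe traddr:0000:88:00.0') or to this listener (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420') with varying queue depth, block size and runtime.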
00:21:37.422 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.680 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:37.680 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.938 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:37.938 00:37:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:38.194 00:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.451 [2024-05-15 00:37:04.359272] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:38.451 [2024-05-15 00:37:04.359585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.451 00:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:38.708 00:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:38.708 00:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:38.709 00:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:38.709 00:37:04 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:40.082 Initializing NVMe Controllers 00:21:40.082 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:40.082 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:40.082 Initialization complete. Launching workers. 00:21:40.082 ======================================================== 00:21:40.082 Latency(us) 00:21:40.082 Device Information : IOPS MiB/s Average min max 00:21:40.082 PCIE (0000:88:00.0) NSID 1 from core 0: 84482.45 330.01 378.12 38.19 5715.47 00:21:40.082 ======================================================== 00:21:40.082 Total : 84482.45 330.01 378.12 38.19 5715.47 00:21:40.082 00:21:40.082 00:37:05 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.082 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.452 Initializing NVMe Controllers 00:21:41.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:41.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:41.452 Initialization complete. Launching workers. 
00:21:41.452 ======================================================== 00:21:41.452 Latency(us) 00:21:41.452 Device Information : IOPS MiB/s Average min max 00:21:41.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 120.57 0.47 8491.26 216.96 45755.19 00:21:41.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.81 0.21 19026.71 5994.59 47899.63 00:21:41.452 ======================================================== 00:21:41.452 Total : 174.38 0.68 11742.20 216.96 47899.63 00:21:41.452 00:21:41.452 00:37:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.452 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.386 Initializing NVMe Controllers 00:21:42.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:42.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:42.386 Initialization complete. Launching workers. 00:21:42.386 ======================================================== 00:21:42.386 Latency(us) 00:21:42.386 Device Information : IOPS MiB/s Average min max 00:21:42.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8179.23 31.95 3912.40 617.76 10093.92 00:21:42.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3874.37 15.13 8260.48 4183.03 16982.53 00:21:42.386 ======================================================== 00:21:42.386 Total : 12053.60 47.08 5309.99 617.76 16982.53 00:21:42.386 00:21:42.386 00:37:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:42.386 00:37:08 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:42.386 00:37:08 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:42.643 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.173 Initializing NVMe Controllers 00:21:45.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.173 Controller IO queue size 128, less than required. 00:21:45.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.173 Controller IO queue size 128, less than required. 00:21:45.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:45.173 Initialization complete. Launching workers. 
00:21:45.173 ======================================================== 00:21:45.173 Latency(us) 00:21:45.173 Device Information : IOPS MiB/s Average min max 00:21:45.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 681.44 170.36 197565.80 104683.88 280571.79 00:21:45.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 563.45 140.86 233268.76 85728.79 373515.59 00:21:45.173 ======================================================== 00:21:45.173 Total : 1244.89 311.22 213725.33 85728.79 373515.59 00:21:45.173 00:21:45.173 00:37:11 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:45.173 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.431 No valid NVMe controllers or AIO or URING devices found 00:21:45.431 Initializing NVMe Controllers 00:21:45.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.431 Controller IO queue size 128, less than required. 00:21:45.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.431 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:45.431 Controller IO queue size 128, less than required. 00:21:45.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.431 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:45.431 WARNING: Some requested NVMe devices were skipped 00:21:45.431 00:37:11 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:45.431 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.710 Initializing NVMe Controllers 00:21:48.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.710 Controller IO queue size 128, less than required. 00:21:48.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.710 Controller IO queue size 128, less than required. 00:21:48.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:48.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:48.710 Initialization complete. Launching workers. 
00:21:48.710 00:21:48.710 ==================== 00:21:48.710 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:48.710 TCP transport: 00:21:48.710 polls: 34929 00:21:48.710 idle_polls: 10422 00:21:48.710 sock_completions: 24507 00:21:48.710 nvme_completions: 3565 00:21:48.710 submitted_requests: 5374 00:21:48.710 queued_requests: 1 00:21:48.710 00:21:48.710 ==================== 00:21:48.710 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:48.710 TCP transport: 00:21:48.710 polls: 38557 00:21:48.710 idle_polls: 13995 00:21:48.710 sock_completions: 24562 00:21:48.710 nvme_completions: 3475 00:21:48.710 submitted_requests: 5174 00:21:48.710 queued_requests: 1 00:21:48.710 ======================================================== 00:21:48.710 Latency(us) 00:21:48.710 Device Information : IOPS MiB/s Average min max 00:21:48.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 890.04 222.51 147894.96 80821.00 206605.95 00:21:48.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 867.57 216.89 151168.69 89676.75 246508.57 00:21:48.710 ======================================================== 00:21:48.710 Total : 1757.61 439.40 149510.89 80821.00 246508.57 00:21:48.710 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.710 rmmod nvme_tcp 00:21:48.710 rmmod nvme_fabrics 00:21:48.710 rmmod nvme_keyring 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 940582 ']' 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 940582 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 940582 ']' 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 940582 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 940582 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 940582' 00:21:48.710 killing process with pid 940582 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 940582 00:21:48.710 [2024-05-15 00:37:14.552869] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:48.710 00:37:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 940582 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.142 00:37:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.675 00:37:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:52.675 00:21:52.675 real 0m22.328s 00:21:52.675 user 1m8.709s 00:21:52.675 sys 0m5.355s 00:21:52.675 00:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:52.675 00:37:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:52.675 ************************************ 00:21:52.675 END TEST nvmf_perf 00:21:52.675 ************************************ 00:21:52.675 00:37:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:52.675 00:37:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:52.675 00:37:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:52.675 00:37:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.675 ************************************ 00:21:52.675 START TEST nvmf_fio_host 00:21:52.675 ************************************ 00:21:52.675 00:37:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:52.675 * Looking for test storage... 
00:21:52.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.675 00:37:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.675 00:37:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.675 00:37:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.675 00:37:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.675 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.676 00:37:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:55.209 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:55.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:55.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:55.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.209 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:21:55.210 00:21:55.210 --- 10.0.0.2 ping statistics --- 00:21:55.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.210 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:21:55.210 00:21:55.210 --- 10.0.0.1 ping statistics --- 00:21:55.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.210 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=944961 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 944961 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 944961 ']' 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:55.210 00:37:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.210 [2024-05-15 00:37:21.000327] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:21:55.210 [2024-05-15 00:37:21.000398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.210 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.210 [2024-05-15 00:37:21.077056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.210 [2024-05-15 00:37:21.188078] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:55.210 [2024-05-15 00:37:21.188134] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.210 [2024-05-15 00:37:21.188154] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.210 [2024-05-15 00:37:21.188166] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.210 [2024-05-15 00:37:21.188175] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.210 [2024-05-15 00:37:21.188230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.210 [2024-05-15 00:37:21.188287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.210 [2024-05-15 00:37:21.188353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.210 [2024-05-15 00:37:21.188356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.210 [2024-05-15 00:37:21.320751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.210 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.468 Malloc1 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:21:55.468 [2024-05-15 00:37:21.398585] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:55.468 [2024-05-15 00:37:21.398874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:21:55.468 
00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:55.468 00:37:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:55.727 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:55.727 fio-3.35 00:21:55.727 Starting 1 thread 00:21:55.727 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.254 00:21:58.254 test: (groupid=0, jobs=1): err= 0: pid=945178: Wed May 15 00:37:24 2024 00:21:58.254 read: IOPS=8015, BW=31.3MiB/s (32.8MB/s)(62.8MiB/2007msec) 00:21:58.254 slat (nsec): min=1963, max=174274, avg=2513.38, stdev=1995.40 00:21:58.254 clat (usec): min=4563, max=15029, avg=8839.44, stdev=690.01 00:21:58.254 lat (usec): min=4595, max=15032, avg=8841.96, stdev=689.91 00:21:58.254 clat percentiles (usec): 00:21:58.254 | 1.00th=[ 7308], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8291], 00:21:58.254 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:58.254 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:21:58.255 | 99.00th=[10552], 99.50th=[10814], 99.90th=[12911], 99.95th=[13698], 00:21:58.255 | 99.99th=[15008] 00:21:58.255 bw ( KiB/s): min=30728, max=32728, per=99.89%, avg=32028.00, stdev=894.70, samples=4 00:21:58.255 iops : min= 7682, max= 8182, avg=8007.00, stdev=223.68, samples=4 00:21:58.255 write: IOPS=7991, BW=31.2MiB/s (32.7MB/s)(62.7MiB/2007msec); 0 zone resets 00:21:58.255 slat (usec): min=2, max=132, avg= 2.69, stdev= 1.43 00:21:58.255 clat (usec): min=1461, max=12797, avg=7037.59, stdev=581.12 00:21:58.255 lat (usec): min=1470, max=12800, avg=7040.28, stdev=581.11 00:21:58.255 clat percentiles (usec): 00:21:58.255 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6587], 00:21:58.255 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:21:58.255 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:21:58.255 | 99.00th=[ 8356], 99.50th=[ 8586], 99.90th=[11076], 99.95th=[12125], 00:21:58.255 | 99.99th=[12649] 00:21:58.255 bw ( KiB/s): min=31752, max=32168, per=100.00%, avg=31970.00, stdev=208.76, samples=4 00:21:58.255 iops : min= 7938, max= 8042, avg=7992.50, stdev=52.19, samples=4 00:21:58.255 lat (msec) : 2=0.01%, 4=0.04%, 10=98.00%, 20=1.96% 00:21:58.255 cpu : usr=49.60%, sys=41.43%, ctx=75, majf=0, minf=5 00:21:58.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:58.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.255 issued rwts: total=16088,16039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.255 00:21:58.255 Run status group 0 (all jobs): 00:21:58.255 READ: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=62.8MiB (65.9MB), run=2007-2007msec 00:21:58.255 WRITE: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=62.7MiB (65.7MB), run=2007-2007msec 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:58.255 00:37:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:58.255 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:58.255 fio-3.35 00:21:58.255 Starting 1 thread 00:21:58.255 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.784 00:22:00.784 test: (groupid=0, jobs=1): err= 0: pid=945517: Wed May 15 00:37:26 2024 00:22:00.784 read: IOPS=8017, BW=125MiB/s (131MB/s)(252MiB/2008msec) 00:22:00.784 slat (nsec): min=2851, max=91539, avg=3549.31, stdev=1487.61 00:22:00.784 clat (usec): min=2570, max=19853, avg=9566.63, stdev=2394.14 00:22:00.784 lat (usec): min=2573, max=19857, avg=9570.18, 
stdev=2394.20 00:22:00.784 clat percentiles (usec): 00:22:00.784 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7439], 00:22:00.784 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10159], 00:22:00.784 | 70.00th=[10814], 80.00th=[11600], 90.00th=[12780], 95.00th=[13698], 00:22:00.784 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16319], 99.95th=[16450], 00:22:00.784 | 99.99th=[17171] 00:22:00.784 bw ( KiB/s): min=58592, max=75232, per=51.38%, avg=65912.00, stdev=7392.83, samples=4 00:22:00.784 iops : min= 3662, max= 4702, avg=4119.50, stdev=462.05, samples=4 00:22:00.784 write: IOPS=4544, BW=71.0MiB/s (74.5MB/s)(134MiB/1890msec); 0 zone resets 00:22:00.784 slat (usec): min=30, max=167, avg=33.48, stdev= 4.79 00:22:00.784 clat (usec): min=6343, max=17900, avg=11108.59, stdev=1850.01 00:22:00.784 lat (usec): min=6375, max=17931, avg=11142.07, stdev=1850.29 00:22:00.784 clat percentiles (usec): 00:22:00.784 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:22:00.784 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11469], 00:22:00.784 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13566], 95.00th=[14484], 00:22:00.784 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17433], 99.95th=[17695], 00:22:00.784 | 99.99th=[17957] 00:22:00.784 bw ( KiB/s): min=60416, max=77536, per=93.80%, avg=68200.00, stdev=7755.56, samples=4 00:22:00.784 iops : min= 3776, max= 4846, avg=4262.50, stdev=484.72, samples=4 00:22:00.784 lat (msec) : 4=0.13%, 10=48.48%, 20=51.39% 00:22:00.784 cpu : usr=74.99%, sys=20.78%, ctx=26, majf=0, minf=1 00:22:00.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:00.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.784 issued rwts: total=16100,8589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.784 00:22:00.784 Run status group 0 (all jobs): 00:22:00.784 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=252MiB (264MB), run=2008-2008msec 00:22:00.784 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=134MiB (141MB), run=1890-1890msec 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:22:00.784 rmmod nvme_tcp 00:22:00.784 rmmod nvme_fabrics 00:22:00.784 rmmod nvme_keyring 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 944961 ']' 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 944961 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 944961 ']' 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 944961 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 944961 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 944961' 00:22:00.784 killing process with pid 944961 00:22:00.784 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 944961 00:22:00.785 [2024-05-15 00:37:26.790428] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:00.785 00:37:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 944961 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.043 00:37:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.581 00:37:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:03.581 00:22:03.581 real 0m10.859s 00:22:03.581 user 0m27.379s 00:22:03.581 sys 0m4.146s 00:22:03.581 00:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:03.581 00:37:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.581 ************************************ 00:22:03.581 END TEST nvmf_fio_host 00:22:03.581 ************************************ 00:22:03.581 00:37:29 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:03.581 00:37:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:03.581 00:37:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:03.581 00:37:29 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:03.581 ************************************ 00:22:03.581 START TEST nvmf_failover 00:22:03.581 ************************************ 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:03.582 * Looking for test storage... 00:22:03.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.582 00:37:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:06.122 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:06.122 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:06.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:06.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.122 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:06.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:22:06.123 00:22:06.123 --- 10.0.0.2 ping statistics --- 00:22:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.123 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:22:06.123 00:22:06.123 --- 10.0.0.1 ping statistics --- 00:22:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.123 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=948007 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 948007 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 948007 ']' 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:06.123 00:37:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.123 [2024-05-15 00:37:31.891876] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:06.123 [2024-05-15 00:37:31.891987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.123 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.123 [2024-05-15 00:37:31.975026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:06.123 [2024-05-15 00:37:32.091529] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.123 [2024-05-15 00:37:32.091599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.123 [2024-05-15 00:37:32.091614] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.123 [2024-05-15 00:37:32.091628] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.123 [2024-05-15 00:37:32.091639] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.123 [2024-05-15 00:37:32.091733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.123 [2024-05-15 00:37:32.091847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.123 [2024-05-15 00:37:32.091850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.688 00:37:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:06.688 00:37:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:22:06.688 00:37:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.688 00:37:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:06.688 00:37:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.946 00:37:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.946 00:37:32 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:06.946 [2024-05-15 00:37:33.096369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.204 00:37:33 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:07.462 Malloc0 00:22:07.462 00:37:33 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.720 00:37:33 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.720 00:37:33 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.977 [2024-05-15 00:37:34.095938] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:07.977 [2024-05-15 00:37:34.096220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.977 00:37:34 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:08.235 [2024-05-15 00:37:34.348872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:08.235 00:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:08.516 [2024-05-15 00:37:34.589816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=948416 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 948416 /var/tmp/bdevperf.sock 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 948416 ']' 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:08.516 00:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:09.451 00:37:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:09.451 00:37:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:22:09.451 00:37:35 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.017 NVMe0n1 00:22:10.017 00:37:35 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.275 00:22:10.275 00:37:36 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=948592 00:22:10.275 00:37:36 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.275 00:37:36 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:11.650 00:37:37 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.650 [2024-05-15 00:37:37.679682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679910] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.679988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 
00:22:11.650 [2024-05-15 00:37:37.680181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 [2024-05-15 00:37:37.680229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1ea0 is same with the state(5) to be set 00:22:11.650 00:37:37 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:14.935 00:37:40 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.935 00:22:14.935 00:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:15.193 [2024-05-15 00:37:41.306828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.306910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.306925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.306965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.306979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.306992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.307004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.307017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.307028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.307040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.307052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.307064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 00:37:41.307076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 [2024-05-15 
00:37:41.307089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b2730 is same with the state(5) to be set 00:22:15.193 00:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:18.476 00:37:44 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.476 [2024-05-15 00:37:44.559250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.476 00:37:44 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:19.852 00:37:45 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:19.852 00:37:45 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 948592 00:22:26.420 0 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 948416 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 948416 ']' 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 948416 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 948416 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 948416' 00:22:26.420 killing process with pid 948416 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 948416 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 948416 00:22:26.420 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:26.420 [2024-05-15 00:37:34.652022] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:26.420 [2024-05-15 00:37:34.652114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948416 ] 00:22:26.420 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.420 [2024-05-15 00:37:34.722862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.420 [2024-05-15 00:37:34.834331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.420 Running I/O for 15 seconds... 
00:22:26.420 [2024-05-15 00:37:37.681379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681728] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.681974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.681989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.420 [2024-05-15 00:37:37.682435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.420 [2024-05-15 00:37:37.682449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77080 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.421 [2024-05-15 00:37:37.682817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.682846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.682877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.682905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 
[2024-05-15 00:37:37.682953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.682971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.682985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.421 [2024-05-15 00:37:37.683848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:26.421 [2024-05-15 00:37:37.683863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.683876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.683893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.683907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.683922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.683944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.683960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.683974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.683989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 
00:37:37.684165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.422 [2024-05-15 00:37:37.684264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 
[2024-05-15 00:37:37.684791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.684946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.684962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.684973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.684984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.685004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.685017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.685028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.685040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.685053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.685066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.685077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.685088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.685101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.685114] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.685125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.685136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.685149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.685161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.685172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.685184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.685196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.685209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.685223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.685250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.685263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.685282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.685293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.422 [2024-05-15 00:37:37.685304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:22:26.422 [2024-05-15 00:37:37.685316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.422 [2024-05-15 00:37:37.685328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.422 [2024-05-15 00:37:37.685339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:26.423 [2024-05-15 00:37:37.685437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685725] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77768 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77776 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.685912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.423 [2024-05-15 00:37:37.685922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.423 [2024-05-15 00:37:37.685955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77136 len:8 PRP1 0x0 PRP2 0x0 00:22:26.423 [2024-05-15 00:37:37.685970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:37.686036] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd245a0 was disconnected and freed. reset controller. 
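The long run of nvme_qpair.c notices above is the host-side driver draining its I/O submission queue: every READ and WRITE still queued on qpair 1 is completed manually with ABORTED - SQ DELETION (00/08) while the TCP qpair is torn down. A minimal shell sketch for spot-checking such a dump offline, assuming the console output was saved to a hypothetical build.log (this is not part of this job's scripts):
  # total number of aborted completions in the saved log
  grep -c 'ABORTED - SQ DELETION' build.log
  # tally aborted commands by opcode name as printed by nvme_io_qpair_print_command
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | awk '{print $NF}' | sort | uniq -c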
00:22:26.423 [2024-05-15 00:37:37.686062] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:22:26.423 [2024-05-15 00:37:37.686101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:26.423 [2024-05-15 00:37:37.686120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.423 [2024-05-15 00:37:37.686136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:26.423 [2024-05-15 00:37:37.686150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.423 [2024-05-15 00:37:37.686164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:26.423 [2024-05-15 00:37:37.686178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.423 [2024-05-15 00:37:37.686191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:26.423 [2024-05-15 00:37:37.686204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.423 [2024-05-15 00:37:37.686217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:26.423 [2024-05-15 00:37:37.689581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:22:26.423 [2024-05-15 00:37:37.689620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd052f0 (9): Bad file descriptor 
00:22:26.423 [2024-05-15 00:37:37.718779] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
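This is the sequence the nvmf host failover test exercises: once the qpair to 10.0.0.2:4420 is gone, bdev_nvme fails the controller's trid over to 10.0.0.2:4421, disconnects, and reconnects, ending in "Resetting controller successful." A rough sketch of how such an alternate path is typically registered ahead of time via SPDK's rpc.py (illustrative values taken from the addresses in the log above; the exact commands used by this job live in its test scripts and are not reproduced here):
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # attaching the same subsystem NQN again with a second traddr/trsvcid gives
  # bdev_nvme an alternate path it can fail over to when the first qpair drops
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1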
00:22:26.423 [2024-05-15 00:37:41.307370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.423 [2024-05-15 00:37:41.307752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.423 [2024-05-15 00:37:41.307766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.424 [2024-05-15 00:37:41.307779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.307793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.424 [2024-05-15 00:37:41.307806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.307821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.424 [2024-05-15 00:37:41.307834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.307848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.424 [2024-05-15 00:37:41.307862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.307876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.424 [2024-05-15 00:37:41.307890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.307905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.424 [2024-05-15 00:37:41.307942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.307959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.307989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308075] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.424 [2024-05-15 00:37:41.308250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.308983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.308997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 
00:37:41.309027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.424 [2024-05-15 00:37:41.309280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.424 [2024-05-15 00:37:41.309295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.309970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.309983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.425 [2024-05-15 00:37:41.310250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.425 [2024-05-15 00:37:41.310709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.425 [2024-05-15 00:37:41.310724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.310737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.310771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.310800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.310843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.310872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.310900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:41.310954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:41.310983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.310999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:41.311013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:41.311042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:41.311071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:41.311100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:41.311130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.311162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:91 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.311196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.311240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.311269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.311297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.311325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:41.311354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd265d0 is same with the state(5) to be set 00:22:26.426 [2024-05-15 00:37:41.311387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.426 [2024-05-15 00:37:41.311398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.426 [2024-05-15 00:37:41.311409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85160 len:8 PRP1 0x0 PRP2 0x0 00:22:26.426 [2024-05-15 00:37:41.311430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311497] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd265d0 was disconnected and freed. reset controller. 
00:22:26.426 [2024-05-15 00:37:41.311514] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:26.426 [2024-05-15 00:37:41.311560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.426 [2024-05-15 00:37:41.311580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.426 [2024-05-15 00:37:41.311608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.426 [2024-05-15 00:37:41.311636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.426 [2024-05-15 00:37:41.311667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:41.311682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.426 [2024-05-15 00:37:41.315032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.426 [2024-05-15 00:37:41.315074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd052f0 (9): Bad file descriptor 00:22:26.426 [2024-05-15 00:37:41.360176] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:26.426 [2024-05-15 00:37:45.810203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:45.810536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.426 [2024-05-15 00:37:45.810565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810581] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.426 [2024-05-15 00:37:45.810842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.426 [2024-05-15 00:37:45.810855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.810869] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.810882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.810897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.810910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.810924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.810962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.810983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.810998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20104 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 
[2024-05-15 00:37:45.811496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.811979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.811993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.812009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.812022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.812037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.812051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.812066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.812079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.427 [2024-05-15 00:37:45.812098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.427 [2024-05-15 00:37:45.812112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.428 [2024-05-15 00:37:45.812228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.428 [2024-05-15 00:37:45.812273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.428 [2024-05-15 00:37:45.812301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.428 [2024-05-15 00:37:45.812328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.428 [2024-05-15 00:37:45.812356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.428 [2024-05-15 00:37:45.812383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.428 [2024-05-15 00:37:45.812411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.428 [2024-05-15 00:37:45.812614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.812658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20440 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.812671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.812702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:22:26.428 [2024-05-15 00:37:45.812713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.812726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.812750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.812761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20456 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.812773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.812797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.812808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20464 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.812820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.812843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.812854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20472 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.812870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.812894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.812905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.812917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.812954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.812969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.812980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20488 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.812993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20496 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20504 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20520 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20528 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20536 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:20544 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20552 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20560 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20568 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.428 [2024-05-15 00:37:45.813506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.428 [2024-05-15 00:37:45.813517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:8 PRP1 0x0 PRP2 0x0 00:22:26.428 [2024-05-15 00:37:45.813530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.428 [2024-05-15 00:37:45.813542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20584 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20592 len:8 PRP1 0x0 PRP2 0x0 
00:22:26.429 [2024-05-15 00:37:45.813621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20600 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20616 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20624 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20632 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.813955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20648 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.813970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.813984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.813995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19752 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19760 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19768 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19784 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19792 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19800 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20656 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20664 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.429 [2024-05-15 00:37:45.814543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20680 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20688 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20696 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19816 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19824 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19832 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814832] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.429 [2024-05-15 00:37:45.814899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19848 len:8 PRP1 0x0 PRP2 0x0 00:22:26.429 [2024-05-15 00:37:45.814912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.429 [2024-05-15 00:37:45.814924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.429 [2024-05-15 00:37:45.814956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.430 [2024-05-15 00:37:45.814968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19856 len:8 PRP1 0x0 PRP2 0x0 00:22:26.430 [2024-05-15 00:37:45.814981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.430 [2024-05-15 00:37:45.814994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.430 [2024-05-15 00:37:45.815005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.430 [2024-05-15 00:37:45.815016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19864 len:8 PRP1 0x0 PRP2 0x0 00:22:26.430 [2024-05-15 00:37:45.815034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.430 [2024-05-15 00:37:45.815047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.430 [2024-05-15 00:37:45.815058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.430 [2024-05-15 00:37:45.815069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:8 PRP1 0x0 PRP2 0x0 00:22:26.430 [2024-05-15 00:37:45.815081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.430 [2024-05-15 00:37:45.815139] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xecf260 was disconnected and freed. reset controller. 
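The run of ABORTED - SQ DELETION notices above is the expected side effect of losing the active path: the I/O submission queue pair is deleted, so every command still queued on it is completed back to bdevperf with an abort status (consistent with the non-zero Fail/s column in the summary that follows) while bdev_nvme resets the controller onto the next registered trid. A one-line editorial helper, not part of failover.sh, for tallying those aborts from the captured output (try.txt is the capture file this test writes and later removes):

    # editorial helper; counts aborted commands recorded in the test's capture file
    grep -c 'ABORTED - SQ DELETION' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt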
00:22:26.430 [2024-05-15 00:37:45.815161] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:26.430 [2024-05-15 00:37:45.815194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.430 [2024-05-15 00:37:45.815211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.430 [2024-05-15 00:37:45.815226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.430 [2024-05-15 00:37:45.815239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.430 [2024-05-15 00:37:45.815253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.430 [2024-05-15 00:37:45.815269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.430 [2024-05-15 00:37:45.815283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.430 [2024-05-15 00:37:45.815296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.430 [2024-05-15 00:37:45.815309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.430 [2024-05-15 00:37:45.818613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.430 [2024-05-15 00:37:45.818652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd052f0 (9): Bad file descriptor 00:22:26.430 [2024-05-15 00:37:45.942606] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
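The failover from 10.0.0.2:4422 back to 10.0.0.2:4420 is possible because the test attaches the same bdev name to several transport IDs: each extra bdev_nvme_attach_controller call for -b NVMe0 against the same subsystem registers another path the driver can reset onto. A minimal sketch of that path setup, assuming the 10.0.0.2 target, cnode1 subsystem and bdevperf RPC socket used throughout this run; the loop is editorial shorthand for the three separate calls traced at host/failover.sh@78-@80 below:

    # $rpc is shorthand for the rpc.py path used throughout this run
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4420 4421 4422; do
        # every call after the first adds a failover trid to the existing NVMe0 controller
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done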
00:22:26.430 00:22:26.430 Latency(us) 00:22:26.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.430 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:26.430 Verification LBA range: start 0x0 length 0x4000 00:22:26.430 NVMe0n1 : 15.02 8733.82 34.12 500.08 0.00 13833.89 788.86 21262.79 00:22:26.430 =================================================================================================================== 00:22:26.430 Total : 8733.82 34.12 500.08 0.00 13833.89 788.86 21262.79 00:22:26.430 Received shutdown signal, test time was about 15.000000 seconds 00:22:26.430 00:22:26.430 Latency(us) 00:22:26.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.430 =================================================================================================================== 00:22:26.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=950403 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 950403 /var/tmp/bdevperf.sock 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 950403 ']' 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
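Two things happen in the trace above: the 15-second run is graded by counting 'Resetting controller successful' lines (three forced failovers are expected, hence the (( count != 3 )) guard), and a fresh bdevperf instance is started with -z so that it idles on /var/tmp/bdevperf.sock until paths have been attached over RPC and a run is triggered with bdevperf.py perform_tests, as the rest of the trace shows. A condensed sketch of that launch-and-drive pattern, with the waitforlisten retry loop and error handling omitted:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z keeps bdevperf idle until perform_tests is sent over the socket given with -r
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # ... add the target listeners and attach the NVMe0 paths to this instance, then:
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests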
00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:26.430 00:37:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.430 00:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:26.430 00:37:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:22:26.430 00:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:26.430 [2024-05-15 00:37:52.451158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:26.430 00:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:26.687 [2024-05-15 00:37:52.695803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:26.687 00:37:52 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:26.945 NVMe0n1 00:22:26.945 00:37:53 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.510 00:22:27.510 00:37:53 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.767 00:22:27.767 00:37:53 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.767 00:37:53 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:28.024 00:37:54 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.281 00:37:54 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:31.605 00:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.605 00:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:31.605 00:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=951071 00:22:31.605 00:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.605 00:37:57 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 951071 00:22:32.539 0 00:22:32.539 00:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:32.539 [2024-05-15 00:37:51.938355] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:32.539 [2024-05-15 00:37:51.938452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950403 ] 00:22:32.539 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.539 [2024-05-15 00:37:52.007887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.539 [2024-05-15 00:37:52.112848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.539 [2024-05-15 00:37:54.237721] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:32.539 [2024-05-15 00:37:54.237814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.539 [2024-05-15 00:37:54.237837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.539 [2024-05-15 00:37:54.237854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.539 [2024-05-15 00:37:54.237884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.539 [2024-05-15 00:37:54.237898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.539 [2024-05-15 00:37:54.237910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.539 [2024-05-15 00:37:54.237925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.539 [2024-05-15 00:37:54.237945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.539 [2024-05-15 00:37:54.237960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:32.539 [2024-05-15 00:37:54.237998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:32.539 [2024-05-15 00:37:54.238029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e32f0 (9): Bad file descriptor 00:22:32.539 [2024-05-15 00:37:54.293668] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:32.539 Running I/O for 1 seconds... 
00:22:32.539 00:22:32.539 Latency(us) 00:22:32.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.539 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:32.539 Verification LBA range: start 0x0 length 0x4000 00:22:32.539 NVMe0n1 : 1.01 8975.30 35.06 0.00 0.00 14198.90 2852.03 15340.28 00:22:32.539 =================================================================================================================== 00:22:32.539 Total : 8975.30 35.06 0.00 0.00 14198.90 2852.03 15340.28 00:22:32.539 00:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.539 00:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:33.106 00:37:58 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:33.106 00:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:33.106 00:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:33.364 00:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:33.621 00:37:59 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:36.904 00:38:02 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.904 00:38:02 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:36.904 00:38:03 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 950403 00:22:36.904 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 950403 ']' 00:22:36.904 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 950403 00:22:36.904 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:22:36.904 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:36.904 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 950403 00:22:37.162 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:37.162 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:37.163 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 950403' 00:22:37.163 killing process with pid 950403 00:22:37.163 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 950403 00:22:37.163 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 950403 00:22:37.422 00:38:03 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:37.422 00:38:03 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:37.681 00:38:03 
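After the timed run, the script tears the remaining paths down one at a time, detaching 10.0.0.2:4422 and then 10.0.0.2:4421 and consulting bdev_nvme_get_controllers in between, before the final controller check and the bdevperf shutdown that follow. A hedged summary of that phase; the loop is editorial shorthand and the intermediate get_controllers check between the two detaches is left out:

    # $rpc is shorthand for the rpc.py path used throughout this run
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4422 4421; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    sleep 3   # matches the pause at host/failover.sh@101 before the last check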
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:37.681 rmmod nvme_tcp 00:22:37.681 rmmod nvme_fabrics 00:22:37.681 rmmod nvme_keyring 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 948007 ']' 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 948007 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 948007 ']' 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 948007 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 948007 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 948007' 00:22:37.681 killing process with pid 948007 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 948007 00:22:37.681 [2024-05-15 00:38:03.668691] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:37.681 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 948007 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.941 00:38:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.845 00:38:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.104 00:22:40.104 real 0m36.810s 00:22:40.104 user 2m7.600s 
00:22:40.104 sys 0m6.675s 00:22:40.104 00:38:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:40.104 00:38:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.104 ************************************ 00:22:40.104 END TEST nvmf_failover 00:22:40.104 ************************************ 00:22:40.104 00:38:06 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:40.104 00:38:06 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:40.104 00:38:06 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:40.104 00:38:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.104 ************************************ 00:22:40.104 START TEST nvmf_host_discovery 00:22:40.104 ************************************ 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:40.104 * Looking for test storage... 00:22:40.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.104 00:38:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.105 00:38:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
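(Aside, not part of the captured trace: the gather_supported_nvmf_pci_devs helper traced above matches candidate NICs against a prebuilt pci_bus_cache map of vendor:device IDs and then looks up their kernel net devices through sysfs. As a rough standalone sketch of the same check — an illustration, not the script's actual code — assuming an Intel E810 port (0x8086:0x159b) like the ones matched later in this log:

  # Sketch only: list E810 ports and the kernel net devices bound to them via sysfs.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
      done
  done

The trace that follows additionally filters on link state ([[ up == up ]]) before appending each interface to net_devs.)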
00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:42.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:42.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:42.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:42.637 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:42.638 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.638 00:38:08 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:42.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:22:42.638 00:22:42.638 --- 10.0.0.2 ping statistics --- 00:22:42.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.638 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:22:42.638 00:22:42.638 --- 10.0.0.1 ping statistics --- 00:22:42.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.638 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=954094 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 954094 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 954094 ']' 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:42.638 00:38:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.638 [2024-05-15 00:38:08.771657] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:42.638 [2024-05-15 00:38:08.771748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.896 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.896 [2024-05-15 00:38:08.850844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.896 [2024-05-15 00:38:08.957845] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
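(Aside, not part of the captured trace: for readability, the nvmf_tcp_init sequence traced above can be restated as a condensed standalone sketch. It assumes the cvl_0_0/cvl_0_1 E810 port pair named in this log and requires root; the target-side port is moved into its own network namespace so the initiator at 10.0.0.1 can reach the target at 10.0.0.2 over TCP port 4420.

  # Sketch of the namespace/addressing steps shown in the trace (assumed equivalent).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why the nvmf_tgt launch above is prefixed with 'ip netns exec cvl_0_0_ns_spdk': NVMF_APP is wrapped in NVMF_TARGET_NS_CMD so the target listens inside the namespace while the host-side tests connect from the initiator interface.)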
00:22:42.896 [2024-05-15 00:38:08.957904] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.896 [2024-05-15 00:38:08.957927] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.896 [2024-05-15 00:38:08.957962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.896 [2024-05-15 00:38:08.957972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.896 [2024-05-15 00:38:08.957999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.829 [2024-05-15 00:38:09.737948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.829 [2024-05-15 00:38:09.745871] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:43.829 [2024-05-15 00:38:09.746176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:43.829 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.830 null0 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.830 null1 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=954242 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 954242 /tmp/host.sock 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 954242 ']' 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:43.830 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:43.830 00:38:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.830 [2024-05-15 00:38:09.821131] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:43.830 [2024-05-15 00:38:09.821201] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954242 ] 00:22:43.830 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.830 [2024-05-15 00:38:09.898485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.088 [2024-05-15 00:38:10.015912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.088 00:38:10 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.088 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 [2024-05-15 00:38:10.435996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:44.346 
00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:44.346 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:22:44.604 00:38:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:22:45.170 [2024-05-15 00:38:11.207187] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:45.170 [2024-05-15 00:38:11.207232] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:45.170 [2024-05-15 00:38:11.207258] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.428 [2024-05-15 00:38:11.334694] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:45.428 [2024-05-15 00:38:11.516885] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:45.428 [2024-05-15 00:38:11.516913] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:45.686 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.687 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.946 [2024-05-15 00:38:11.864215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.946 [2024-05-15 00:38:11.865268] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:45.946 [2024-05-15 00:38:11.865299] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.946 [2024-05-15 00:38:11.993150] 
bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:45.946 00:38:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:22:46.233 [2024-05-15 00:38:12.131999] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:46.233 [2024-05-15 00:38:12.132021] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:46.233 [2024-05-15 00:38:12.132030] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.171 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.172 [2024-05-15 00:38:13.097058] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:47.172 [2024-05-15 00:38:13.097102] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:47.172 [2024-05-15 00:38:13.097809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.172 [2024-05-15 00:38:13.097845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.172 [2024-05-15 00:38:13.097864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.172 [2024-05-15 00:38:13.097880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.172 [2024-05-15 00:38:13.097896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.172 [2024-05-15 00:38:13.097911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.172 [2024-05-15 00:38:13.097928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.172 [2024-05-15 00:38:13.097952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.172 [2024-05-15 00:38:13.097990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:47.172 [2024-05-15 00:38:13.107805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.172 [2024-05-15 00:38:13.117854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.172 [2024-05-15 00:38:13.118184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.118355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.118382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be3900 with addr=10.0.0.2, port=4420 00:22:47.172 [2024-05-15 00:38:13.118400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.172 [2024-05-15 00:38:13.118424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.172 [2024-05-15 00:38:13.118460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.172 [2024-05-15 00:38:13.118479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.172 [2024-05-15 00:38:13.118497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.172 [2024-05-15 00:38:13.118518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:47.172 [2024-05-15 00:38:13.127945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.172 [2024-05-15 00:38:13.128224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.128451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.128480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be3900 with addr=10.0.0.2, port=4420 00:22:47.172 [2024-05-15 00:38:13.128498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.172 [2024-05-15 00:38:13.128523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.172 [2024-05-15 00:38:13.128560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.172 [2024-05-15 00:38:13.128580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.172 [2024-05-15 00:38:13.128595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.172 [2024-05-15 00:38:13.128618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:47.172 [2024-05-15 00:38:13.138032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.172 [2024-05-15 00:38:13.138336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.138557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.138586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be3900 with addr=10.0.0.2, port=4420 00:22:47.172 [2024-05-15 00:38:13.138604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.172 [2024-05-15 00:38:13.138629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.172 [2024-05-15 00:38:13.138666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.172 [2024-05-15 00:38:13.138685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.172 [2024-05-15 00:38:13.138700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.172 [2024-05-15 00:38:13.138723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:47.172 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:47.172 [2024-05-15 00:38:13.148102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.172 [2024-05-15 00:38:13.148411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.148637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.172 [2024-05-15 00:38:13.148667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be3900 with addr=10.0.0.2, port=4420 00:22:47.172 [2024-05-15 00:38:13.148686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.172 [2024-05-15 00:38:13.148712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.172 [2024-05-15 00:38:13.148750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.172 [2024-05-15 00:38:13.148770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.172 [2024-05-15 00:38:13.148786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.172 [2024-05-15 00:38:13.148808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:47.172 [2024-05-15 00:38:13.158179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.172 [2024-05-15 00:38:13.158444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.173 [2024-05-15 00:38:13.158634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.173 [2024-05-15 00:38:13.158665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be3900 with addr=10.0.0.2, port=4420 00:22:47.173 [2024-05-15 00:38:13.158684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.173 [2024-05-15 00:38:13.158709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.173 [2024-05-15 00:38:13.158733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.173 [2024-05-15 00:38:13.158750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.173 [2024-05-15 00:38:13.158765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.173 [2024-05-15 00:38:13.158786] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:47.173 [2024-05-15 00:38:13.168259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.173 [2024-05-15 00:38:13.168489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.173 [2024-05-15 00:38:13.168725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.173 [2024-05-15 00:38:13.168751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be3900 with addr=10.0.0.2, port=4420 00:22:47.173 [2024-05-15 00:38:13.168767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.173 [2024-05-15 00:38:13.168789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.173 [2024-05-15 00:38:13.168810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.173 [2024-05-15 00:38:13.168824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.173 [2024-05-15 00:38:13.168837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.173 [2024-05-15 00:38:13.168856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.173 [2024-05-15 00:38:13.178339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.173 [2024-05-15 00:38:13.178610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.173 [2024-05-15 00:38:13.178806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.173 [2024-05-15 00:38:13.178832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be3900 with addr=10.0.0.2, port=4420 00:22:47.173 [2024-05-15 00:38:13.178848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3900 is same with the state(5) to be set 00:22:47.173 [2024-05-15 00:38:13.178871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be3900 (9): Bad file descriptor 00:22:47.173 [2024-05-15 00:38:13.178910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.173 [2024-05-15 00:38:13.178926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.173 [2024-05-15 00:38:13.178970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.173 [2024-05-15 00:38:13.179022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:47.173 [2024-05-15 00:38:13.183969] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:47.173 [2024-05-15 00:38:13.184014] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:47.173 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.430 00:38:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.363 [2024-05-15 00:38:14.445067] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:48.363 [2024-05-15 00:38:14.445088] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:48.363 [2024-05-15 00:38:14.445108] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:48.622 [2024-05-15 00:38:14.531417] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:48.622 [2024-05-15 00:38:14.639633] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:48.622 [2024-05-15 00:38:14.639671] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.622 request: 00:22:48.622 { 00:22:48.622 "name": "nvme", 00:22:48.622 "trtype": "tcp", 00:22:48.622 "traddr": "10.0.0.2", 00:22:48.622 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:48.622 "adrfam": "ipv4", 00:22:48.622 "trsvcid": "8009", 00:22:48.622 "wait_for_attach": true, 00:22:48.622 "method": "bdev_nvme_start_discovery", 00:22:48.622 "req_id": 1 00:22:48.622 } 00:22:48.622 Got JSON-RPC error response 00:22:48.622 response: 00:22:48.622 { 00:22:48.622 "code": -17, 00:22:48.622 "message": "File exists" 00:22:48.622 } 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.622 request: 00:22:48.622 { 00:22:48.622 "name": "nvme_second", 00:22:48.622 "trtype": "tcp", 00:22:48.622 "traddr": "10.0.0.2", 00:22:48.622 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:48.622 "adrfam": "ipv4", 00:22:48.622 "trsvcid": "8009", 00:22:48.622 "wait_for_attach": true, 00:22:48.622 "method": "bdev_nvme_start_discovery", 00:22:48.622 "req_id": 1 00:22:48.622 } 00:22:48.622 Got JSON-RPC error response 00:22:48.622 response: 00:22:48.622 { 00:22:48.622 "code": -17, 00:22:48.622 "message": "File exists" 00:22:48.622 } 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:48.622 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:48.623 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:48.880 
00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.880 00:38:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.813 [2024-05-15 00:38:15.844086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.813 [2024-05-15 00:38:15.844338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.813 [2024-05-15 00:38:15.844377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c15ad0 with addr=10.0.0.2, port=8010 00:22:49.813 [2024-05-15 00:38:15.844407] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:49.813 [2024-05-15 00:38:15.844424] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:49.813 [2024-05-15 00:38:15.844453] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:50.747 [2024-05-15 00:38:16.846586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.747 [2024-05-15 00:38:16.846868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.747 [2024-05-15 00:38:16.846899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1710 with addr=10.0.0.2, port=8010 00:22:50.747 [2024-05-15 00:38:16.846944] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:50.747 [2024-05-15 00:38:16.846964] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:50.747 [2024-05-15 00:38:16.846979] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:52.120 [2024-05-15 00:38:17.848687] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:52.120 request: 00:22:52.120 { 00:22:52.120 "name": "nvme_second", 00:22:52.120 "trtype": "tcp", 00:22:52.120 "traddr": "10.0.0.2", 00:22:52.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:52.120 
"adrfam": "ipv4", 00:22:52.120 "trsvcid": "8010", 00:22:52.120 "attach_timeout_ms": 3000, 00:22:52.120 "method": "bdev_nvme_start_discovery", 00:22:52.120 "req_id": 1 00:22:52.120 } 00:22:52.120 Got JSON-RPC error response 00:22:52.120 response: 00:22:52.120 { 00:22:52.120 "code": -110, 00:22:52.120 "message": "Connection timed out" 00:22:52.120 } 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.120 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 954242 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.121 rmmod nvme_tcp 00:22:52.121 rmmod nvme_fabrics 00:22:52.121 rmmod nvme_keyring 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 954094 ']' 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 954094 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 954094 ']' 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 954094 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:52.121 00:38:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 954094 00:22:52.121 00:38:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:52.121 00:38:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:52.121 00:38:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 954094' 00:22:52.121 killing process with pid 954094 00:22:52.121 00:38:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 954094 00:22:52.121 [2024-05-15 00:38:18.008671] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:52.121 00:38:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 954094 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.380 00:38:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.285 00:38:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.285 00:22:54.285 real 0m14.277s 00:22:54.285 user 0m20.056s 00:22:54.285 sys 0m3.130s 00:22:54.285 00:38:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:54.285 00:38:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.285 ************************************ 00:22:54.285 END TEST nvmf_host_discovery 00:22:54.285 ************************************ 00:22:54.285 00:38:20 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:54.285 00:38:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:54.285 00:38:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:54.285 00:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.285 ************************************ 00:22:54.285 START TEST nvmf_host_multipath_status 00:22:54.285 ************************************ 00:22:54.285 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:54.544 * Looking for test storage... 
00:22:54.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.544 00:38:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.544 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.545 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.545 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.545 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.545 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.545 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.545 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.545 00:38:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.076 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:57.077 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:57.077 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
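The block above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it builds per-vendor PCI device-ID lists (e810, x722, mlx), matches the two 0x8086:0x159b E810 functions, and then resolves each function to its kernel net device through sysfs. A minimal, illustrative sketch of that lookup (not the script's verbatim code), assuming the same sysfs paths and device names shown in the trace:

    # sketch: map the two E810 (0x8086:0x159b) ports found above to their net devices
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue              # skip if the driver exposes no net device
            echo "Found net devices under $pci: ${netdir##*/}"   # e.g. cvl_0_0 / cvl_0_1
        done
    done
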
00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:57.077 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:57.077 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.077 00:38:22 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:22:57.077 00:22:57.077 --- 10.0.0.2 ping statistics --- 00:22:57.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.077 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:57.077 00:22:57.077 --- 10.0.0.1 ping statistics --- 00:22:57.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.077 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.077 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.078 00:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=957670 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 957670 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 957670 ']' 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:57.078 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:57.078 [2024-05-15 00:38:23.054279] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
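What nvmf_tcp_init just did: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target-side port (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), port 4420 was opened in iptables, and both directions were verified with ping before nvmf_tgt was launched inside the namespace. Condensed into a sketch using the same interface names and addresses as the trace (illustrative only):

    # sketch: the namespace/topology setup captured above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
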
00:22:57.078 [2024-05-15 00:38:23.054355] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.078 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.078 [2024-05-15 00:38:23.136006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:57.337 [2024-05-15 00:38:23.256954] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.337 [2024-05-15 00:38:23.257020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.337 [2024-05-15 00:38:23.257036] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.337 [2024-05-15 00:38:23.257049] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.337 [2024-05-15 00:38:23.257061] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.337 [2024-05-15 00:38:23.258956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.337 [2024-05-15 00:38:23.258983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=957670 00:22:57.337 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:57.595 [2024-05-15 00:38:23.610489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.595 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:57.852 Malloc0 00:22:57.852 00:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:58.110 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.367 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.625 [2024-05-15 00:38:24.620120] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:22:58.625 [2024-05-15 00:38:24.620417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.625 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:58.882 [2024-05-15 00:38:24.861017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=957848 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 957848 /var/tmp/bdevperf.sock 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 957848 ']' 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
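By this point multipath_status.sh has one Malloc namespace exported through two TCP listeners (4420 and 4421) on the same subsystem, and bdevperf has been started with its own RPC socket so both paths can be attached to a single Nvme0 controller (the second attach, shown just below, adds -x multipath). Compressed from the rpc.py calls in the trace, with the long workspace paths shortened for readability (sketch, not verbatim):

    # sketch: target + bdevperf setup steps visible in the trace
    rpc=scripts/rpc.py                                             # abbreviation of the full path above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

The set_ANA_state/check_status cycles that follow then flip each listener between optimized, non_optimized and inaccessible, and the jq filters over bdev_nvme_get_io_paths verify the expected current/connected/accessible flags for ports 4420 and 4421 after each change.
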
00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:58.882 00:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:59.815 00:38:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:59.815 00:38:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:22:59.815 00:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:00.072 00:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:00.637 Nvme0n1 00:23:00.637 00:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:00.894 Nvme0n1 00:23:00.894 00:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:00.894 00:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:03.469 00:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:03.469 00:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:03.469 00:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:03.469 00:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:04.403 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:04.403 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:04.403 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.403 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.660 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.660 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:04.660 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.660 00:38:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.918 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.918 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.919 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.919 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:05.177 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.177 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:05.177 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.177 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.434 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.434 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:05.434 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.434 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.692 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.692 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:05.692 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.692 00:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.950 00:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.950 00:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:05.950 00:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:06.208 00:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:06.466 00:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:07.399 00:38:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:07.399 00:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:07.399 00:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.399 00:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.657 00:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.657 00:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:07.657 00:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.657 00:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.914 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.914 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.914 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.914 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:08.172 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.172 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:08.172 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.172 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:08.430 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.430 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:08.430 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.430 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.687 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.687 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.687 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.687 00:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.944 00:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.944 00:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:08.944 00:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:09.201 00:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:09.458 00:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:10.388 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:10.388 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.388 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.388 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.646 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.646 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:10.646 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.646 00:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.903 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.903 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.903 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.903 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.162 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.162 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.162 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.162 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.420 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.420 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.420 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.420 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.677 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.678 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.678 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.678 00:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.936 00:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.936 00:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:11.936 00:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:12.194 00:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:12.451 00:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:13.383 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:13.383 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:13.383 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.384 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.640 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.640 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:13.640 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.640 00:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:13.898 00:38:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.898 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:13.898 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.898 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.155 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.155 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.155 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.156 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.413 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.413 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.413 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.413 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.670 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.670 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:14.670 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.670 00:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:14.929 00:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.929 00:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:14.929 00:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:15.187 00:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:15.444 00:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:16.408 00:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:16.408 00:38:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:16.408 00:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.408 00:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.665 00:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.665 00:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:16.665 00:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.665 00:38:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.922 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.922 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.922 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.922 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.179 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.179 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.179 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.179 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.435 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.435 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:17.435 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.435 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.692 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.692 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:17.692 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.692 00:38:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.949 00:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.949 00:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:17.949 00:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:18.206 00:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:18.463 00:38:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:19.394 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:19.394 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:19.394 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.394 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.651 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.651 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:19.651 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.651 00:38:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.909 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.909 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.909 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.909 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.166 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.166 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:20.166 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.166 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.424 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.424 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:20.424 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.424 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.682 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.682 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.682 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.682 00:38:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.939 00:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.939 00:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:21.197 00:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:21.197 00:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:21.455 00:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:21.712 00:38:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:22.645 00:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:22.645 00:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:22.645 00:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.645 00:38:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.903 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.903 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:23.160 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.160 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:23:23.160 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.160 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:23.160 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.160 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:23.419 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.419 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:23.419 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.419 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:23.675 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.675 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:23.675 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.675 00:38:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:23.933 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.933 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:23.933 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.933 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:24.190 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.190 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:24.190 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:24.756 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:24.756 00:38:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:26.127 00:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:23:26.127 00:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:26.127 00:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.127 00:38:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:26.127 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:26.127 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:26.127 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.127 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:26.384 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.384 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:26.384 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.384 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:26.641 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.641 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:26.641 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.641 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:26.899 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.899 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:26.899 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.899 00:38:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:27.156 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.156 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:27.156 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.156 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:27.427 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.427 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:27.427 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:27.690 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:27.947 00:38:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:28.879 00:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:28.879 00:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:28.879 00:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.879 00:38:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.138 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.138 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:29.138 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.138 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:29.433 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.433 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:29.433 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.433 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:29.690 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.690 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:29.690 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.690 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:29.948 00:38:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.948 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:29.948 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.948 00:38:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.206 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.206 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:30.206 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.206 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.463 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.463 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:30.463 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:30.721 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:30.978 00:38:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:31.910 00:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:31.910 00:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:31.911 00:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.911 00:38:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:32.168 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.168 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:32.168 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.168 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.426 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:32.426 00:38:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.426 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.426 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:32.684 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.684 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:32.684 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.684 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:32.941 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.941 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:32.941 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.941 00:38:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.198 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.198 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:33.198 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.198 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 957848 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 957848 ']' 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 957848 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 957848 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
957848' 00:23:33.455 killing process with pid 957848 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 957848 00:23:33.455 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 957848 00:23:33.455 Connection closed with partial response: 00:23:33.456 00:23:33.456 00:23:33.732 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 957848 00:23:33.732 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:33.732 [2024-05-15 00:38:24.922179] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:33.732 [2024-05-15 00:38:24.922281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957848 ] 00:23:33.732 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.732 [2024-05-15 00:38:24.994819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.732 [2024-05-15 00:38:25.106255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.732 Running I/O for 90 seconds... 00:23:33.732 [2024-05-15 00:38:41.289393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.732 [2024-05-15 00:38:41.289466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.289502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.732 [2024-05-15 00:38:41.289520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.289542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.732 [2024-05-15 00:38:41.289558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.289580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.732 [2024-05-15 00:38:41.289596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.289618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.732 [2024-05-15 00:38:41.289633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.289654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.732 [2024-05-15 00:38:41.289670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290095] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.732 [2024-05-15 00:38:41.290117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 
sqhd:0018 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.290806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.290823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.732 [2024-05-15 00:38:41.291818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:33.732 [2024-05-15 00:38:41.291840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.291856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.291878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.291893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.291920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.291961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.291985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 
nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.292962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.292984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:33.733 [2024-05-15 00:38:41.293423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.733 [2024-05-15 00:38:41.293440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
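For reference, the ANA failover checks traced above reduce to three small shell helpers. The sketch below is reconstructed only from the multipath_status.sh call sites visible in this trace (set_ANA_state at script lines 59-60, port_status at line 64, check_status at lines 68-73); the variable names rpc_py and bdevperf_rpc_sock are assumptions for readability, and the multipath_status.sh shipped in the SPDK tree remains the authoritative version.
  # Paths as they appear in the trace; adjust for your workspace.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() {
    # $1 = trsvcid, $2 = io_path field (current|connected|accessible), $3 = expected value
    [[ $($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
  }

  check_status() {
    # $1/$2 = current, $3/$4 = connected, $5/$6 = accessible, for ports 4420/4421
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
  }
With these helpers, the sequence traced above amounts to set_ANA_state non_optimized inaccessible; sleep 1; check_status true false true true true false: the 4421 path stops being current and accessible while 4420 keeps carrying I/O, which is exactly what the bdevperf log records that follow (ASYMMETRIC ACCESS INACCESSIBLE completions on qid:1) show from the initiator side.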
00:23:33.734 [2024-05-15 00:38:41.293569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.293967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.293985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.294006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.294023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.294744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.294767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.294794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.294812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.294836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.294853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.294875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.294892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.294914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.294939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.294964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.294982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295060] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 
00:38:41.295469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.734 [2024-05-15 00:38:41.295783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.734 [2024-05-15 00:38:41.295805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.735 [2024-05-15 00:38:41.295821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.735 [2024-05-15 00:38:41.295842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66592 len:8 SGL DATA BLOCK 
00:23:33.735 [2024-05-15 00:38:41.295858 - 00:38:41.319959] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: per-command output for every outstanding WRITE/READ on sqid:1 (nsid:1, lba 65680-66696, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000 or SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.319978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:33.740 [2024-05-15 00:38:41.320308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.740 [2024-05-15 00:38:41.320325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:23:33.741 [2024-05-15 00:38:41.320398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.320719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.320735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.321461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.321489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.321518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.321536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.321575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.321592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.321630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.321647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.321670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.321687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:33.741 [2024-05-15 00:38:41.321710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.741 [2024-05-15 00:38:41.321727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.321749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.321766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.321789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.321806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.321828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.321845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.321867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.321884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.321907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.321924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.321960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.321979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.742 [2024-05-15 00:38:41.322341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66592 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.322961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.322980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.323022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.323059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323080] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.323096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.323149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.323203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.742 [2024-05-15 00:38:41.323243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.742 [2024-05-15 00:38:41.323281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:33.742 [2024-05-15 00:38:41.323304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.742 [2024-05-15 00:38:41.323321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.743 [2024-05-15 00:38:41.323360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.743 [2024-05-15 00:38:41.323399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.743 [2024-05-15 00:38:41.323439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.743 [2024-05-15 00:38:41.323492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 
00:38:41.323515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.743 [2024-05-15 00:38:41.323535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.323952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 
sqhd:0018 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.323990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.324006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.324028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.324043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.324064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.324083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.324105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.324121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.324845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.324868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.324895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.324913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.324962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.324982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:33.743 [2024-05-15 00:38:41.325757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.743 [2024-05-15 00:38:41.325771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.325792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.325806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.325831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.325847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.325867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.325882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.325902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 
[2024-05-15 00:38:41.325917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.325962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.325980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.326975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.326997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327165] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.744 [2024-05-15 00:38:41.327402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.744 [2024-05-15 00:38:41.327432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.327454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.327470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.327492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.327507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d 
p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.328948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.328980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.745 [2024-05-15 00:38:41.329553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.745 [2024-05-15 00:38:41.329688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.745 [2024-05-15 00:38:41.329703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.329723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.329739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.329760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.329775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.329796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.329811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.329832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.329848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.329869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.329885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.329920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66672 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.329955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.746 [2024-05-15 00:38:41.330141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.746 [2024-05-15 00:38:41.330191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.746 [2024-05-15 00:38:41.330230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.746 [2024-05-15 00:38:41.330269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.746 [2024-05-15 00:38:41.330324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.746 [2024-05-15 00:38:41.330378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.746 [2024-05-15 00:38:41.330417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:23:33.746 [2024-05-15 00:38:41.330814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.330891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.330905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.746 [2024-05-15 00:38:41.331970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:33.746 [2024-05-15 00:38:41.331992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.747 [2024-05-15 00:38:41.332656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.332941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.332990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.333007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.333030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.333047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.747 [2024-05-15 00:38:41.340520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.747 [2024-05-15 00:38:41.340536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:33.748 
[2024-05-15 00:38:41.340811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.340959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.340985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.341955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.341984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342015] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 00:38:41.342404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.748 [2024-05-15 00:38:41.342427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.748 [2024-05-15 
00:23:33.748-00:23:33.754 [2024-05-15 00:38:41 - 00:38:56] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated per-command output on qid:1 -- READ/WRITE commands (sqid:1, nsid:1, various cid, lba ranges ~28944-30064 and ~65680-66696, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; the individual, near-identical NOTICE entries are condensed here.
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.920236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.920276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.920332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.920371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.920411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.920464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:23:33.754 [2024-05-15 00:38:56.920682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.920908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.920960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.920994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.754 [2024-05-15 00:38:56.921298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.921337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.921377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.754 [2024-05-15 00:38:56.921430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:33.754 [2024-05-15 00:38:56.921451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.921468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.921504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.921520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.921555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.921572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.921595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.921611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.921633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.921649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.921690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.921708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.921732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.921749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.922524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.922571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.922610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.922664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:33.755 [2024-05-15 00:38:56.922702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.922739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.922792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.922828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.922865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.922902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.922967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.922986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.923184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.923231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.923271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.923309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.923349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.923388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.923548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.923564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.924610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.924652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.924706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.924760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.924800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.755 [2024-05-15 00:38:56.924839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.924878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.755 [2024-05-15 00:38:56.924900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.755 [2024-05-15 00:38:56.924916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.924950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.924977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:23:33.756 [2024-05-15 00:38:56.924999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.925021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.925061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.925100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.925139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.925178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.925216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.925255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.925293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.925332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.925370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.925394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.925411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.926284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.926321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.926357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.926539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.926592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.926651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.926714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.926730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.928298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:33.756 [2024-05-15 00:38:56.928345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.928384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.928425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.928479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.928518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.928556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.928593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.928644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.756 [2024-05-15 00:38:56.928686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:33.756 [2024-05-15 00:38:56.928709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.756 [2024-05-15 00:38:56.928725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.928746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.928761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.928783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.928798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.928819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.928834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.928855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.928870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.928891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.928906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.928956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.928975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.928997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.929014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.929093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.929368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.929404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.929513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.929534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.929550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:23:33.757 [2024-05-15 00:38:56.932134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.932698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.932738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.932783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.932824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.757 [2024-05-15 00:38:56.932903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.757 [2024-05-15 00:38:56.932951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:33.757 [2024-05-15 00:38:56.932981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.932998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.933038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.933077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.933117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.933156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.933196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.933234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.933289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.933328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:33.758 [2024-05-15 00:38:56.933368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.933406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.933446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.933485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.933508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.933526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.935324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.935876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.935928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.935960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.935992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.936033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.758 [2024-05-15 00:38:56.936078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.936121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.936161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.936200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.936254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.758 [2024-05-15 00:38:56.936308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:33.758 [2024-05-15 00:38:56.936329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.936345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:23:33.759 [2024-05-15 00:38:56.936404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.936456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.936609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.936742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.936758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.939074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.939583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.939636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.939673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.939726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.939764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.939801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.759 [2024-05-15 00:38:56.939899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.939947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.939983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.940000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.941174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.941199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.941249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.941267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.941290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.759 [2024-05-15 00:38:56.941322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.941346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.941363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.759 [2024-05-15 00:38:56.941386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.759 [2024-05-15 00:38:56.941402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.941442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.941481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.941521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.941845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.941885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.941925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.941971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.941989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.942686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.942711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.942738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.942757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.942780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.942811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.942840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.942857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.942880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.942896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.942943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.942963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.942987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:23:33.760 [2024-05-15 00:38:56.943067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.760 [2024-05-15 00:38:56.943715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.760 [2024-05-15 00:38:56.943754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:33.760 [2024-05-15 00:38:56.943776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.943793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:33.761 [2024-05-15 00:38:56.945581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.945910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.945960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.945983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.946118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.946331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.946368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.946441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.946515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.946568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.946607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.946629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.946645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.947430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.761 [2024-05-15 00:38:56.947455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.947483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.761 [2024-05-15 00:38:56.947501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:33.761 [2024-05-15 00:38:56.949190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:23:33.762 [2024-05-15 00:38:56.949243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.949897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.949970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.949988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.950027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.950066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.950108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.950148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.950188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.950226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.950265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.950319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.950374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.950398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.950415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.762 [2024-05-15 00:38:56.951149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.951214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.951256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.951295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.951350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.951396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.762 [2024-05-15 00:38:56.951450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.951487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.951538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.951580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:33.762 [2024-05-15 00:38:56.951602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.762 [2024-05-15 00:38:56.951619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.951641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.951657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.951680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.763 [2024-05-15 00:38:56.951707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.951731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.763 [2024-05-15 00:38:56.951747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.952892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.763 [2024-05-15 00:38:56.952917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.952953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.763 [2024-05-15 00:38:56.952972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.952995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.953047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.953088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.953127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.953166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.953220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.953260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:33.763 [2024-05-15 00:38:56.953314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.763 [2024-05-15 00:38:56.953330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:33.763 Received shutdown signal, test time was about 32.260117 seconds 00:23:33.763 00:23:33.763 Latency(us) 00:23:33.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.763 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.763 Verification LBA range: start 0x0 length 0x4000 00:23:33.763 Nvme0n1 : 32.26 7925.70 30.96 0.00 0.00 16123.59 530.96 4076242.11 00:23:33.763 =================================================================================================================== 00:23:33.763 Total : 7925.70 30.96 0.00 0.00 16123.59 530.96 4076242.11 00:23:33.763 00:38:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.021 rmmod nvme_tcp 00:23:34.021 rmmod nvme_fabrics 00:23:34.021 rmmod nvme_keyring 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:34.021 00:39:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 957670 ']' 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 957670 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 957670 ']' 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 957670 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 957670 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 957670' 00:23:34.021 killing process with pid 957670 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 957670 00:23:34.021 [2024-05-15 00:39:00.112427] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:34.021 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 957670 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.279 00:39:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.810 00:39:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.810 00:23:36.810 real 0m42.048s 00:23:36.810 user 2m4.498s 00:23:36.810 sys 0m11.475s 00:23:36.810 00:39:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:36.810 00:39:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:36.810 ************************************ 00:23:36.810 END TEST nvmf_host_multipath_status 00:23:36.810 ************************************ 00:23:36.810 00:39:02 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:36.810 00:39:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:36.810 00:39:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # 
xtrace_disable 00:23:36.810 00:39:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:36.810 ************************************ 00:23:36.810 START TEST nvmf_discovery_remove_ifc 00:23:36.810 ************************************ 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:36.810 * Looking for test storage... 00:23:36.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:36.810 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:36.811 00:39:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.338 00:39:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.338 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:39.339 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:39.339 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:39.339 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:39.339 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.339 
00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.339 00:39:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:39.339 00:23:39.339 --- 10.0.0.2 ping statistics --- 00:23:39.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.339 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:23:39.339 00:23:39.339 --- 10.0.0.1 ping statistics --- 00:23:39.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.339 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=964586 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 964586 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 964586 ']' 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:39.339 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.340 [2024-05-15 00:39:05.116692] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
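The lines above show the standard bring-up for the target side of this test: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern, in the harness's own bash, follows; this is not the actual waitforlisten implementation from autotest_common.sh, and $rootdir plus the retry count are assumptions made only for the sketch.

    start_target_and_wait() {
        # start the NVMe-oF target inside the target-side network namespace,
        # mirroring the 'ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2' line above
        ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
        nvmfpid=$!
        # poll the default RPC socket until the app is ready to serve RPCs;
        # rpc_get_methods is a lightweight call that only succeeds once the socket is live
        for ((i = 0; i < 60; i++)); do
            if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        echo "nvmf_tgt (pid $nvmfpid) never started listening on /var/tmp/spdk.sock" >&2
        return 1
    }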
00:23:39.340 [2024-05-15 00:39:05.116771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.340 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.340 [2024-05-15 00:39:05.196871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.340 [2024-05-15 00:39:05.308346] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.340 [2024-05-15 00:39:05.308405] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.340 [2024-05-15 00:39:05.308418] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.340 [2024-05-15 00:39:05.308429] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.340 [2024-05-15 00:39:05.308439] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.340 [2024-05-15 00:39:05.308466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.340 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.340 [2024-05-15 00:39:05.464079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.340 [2024-05-15 00:39:05.472053] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:39.340 [2024-05-15 00:39:05.472331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:39.340 null0 00:23:39.598 [2024-05-15 00:39:05.504199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=964613 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 964613 /tmp/host.sock 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 964613 ']' 00:23:39.598 00:39:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:39.598 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:39.598 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.598 [2024-05-15 00:39:05.566250] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:39.598 [2024-05-15 00:39:05.566334] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964613 ] 00:23:39.598 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.598 [2024-05-15 00:39:05.634882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.598 [2024-05-15 00:39:05.739727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.856 00:39:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.226 [2024-05-15 00:39:06.994137] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:41.226 [2024-05-15 00:39:06.994169] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:41.226 [2024-05-15 
00:39:06.994194] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.226 [2024-05-15 00:39:07.081532] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:41.226 [2024-05-15 00:39:07.222625] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:41.226 [2024-05-15 00:39:07.222697] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:41.226 [2024-05-15 00:39:07.222744] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:41.226 [2024-05-15 00:39:07.222769] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.226 [2024-05-15 00:39:07.222799] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.226 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.227 [2024-05-15 00:39:07.230899] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22ad0d0 was disconnected and freed. delete nvme_qpair. 
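Everything from wait_for_bdev nvme0n1 down to the xargs call above is the harness polling the host app's bdev table over its private RPC socket (/tmp/host.sock) until discovery has attached the expected namespace. A simplified sketch of those two helpers, assuming rpc.py and jq are on PATH and omitting any extra handling the real test/nvmf/host/discovery_remove_ifc.sh may do:

    get_bdev_list() {
        # names of all bdevs the host app currently exposes, normalized to one sorted line
        "$rootdir/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # block until the bdev list matches the expected value ('' means no bdevs left)
        local expected="$1"
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1    # as invoked at host/discovery_remove_ifc.sh@72 above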
00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.227 00:39:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:42.599 00:39:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:43.531 00:39:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:44.462 00:39:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:45.393 00:39:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:46.765 00:39:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.765 [2024-05-15 00:39:12.663627] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:46.765 [2024-05-15 00:39:12.663702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.765 [2024-05-15 00:39:12.663739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.765 [2024-05-15 00:39:12.663760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.765 [2024-05-15 00:39:12.663776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.765 [2024-05-15 00:39:12.663792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.765 [2024-05-15 00:39:12.663807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.765 [2024-05-15 00:39:12.663823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.765 [2024-05-15 00:39:12.663838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.765 [2024-05-15 00:39:12.663855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.765 [2024-05-15 00:39:12.663870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.765 [2024-05-15 00:39:12.663887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2274440 is same with the state(5) to be set 00:23:46.765 [2024-05-15 00:39:12.673644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2274440 (9): Bad file descriptor 00:23:46.765 [2024-05-15 00:39:12.683694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:47.696 00:39:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.696 00:39:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.696 00:39:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.696 00:39:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.696 00:39:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.696 00:39:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.696 00:39:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.696 [2024-05-15 00:39:13.740969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:48.627 [2024-05-15 
00:39:14.764972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:48.627 [2024-05-15 00:39:14.765033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2274440 with addr=10.0.0.2, port=4420 00:23:48.627 [2024-05-15 00:39:14.765064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2274440 is same with the state(5) to be set 00:23:48.627 [2024-05-15 00:39:14.765574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2274440 (9): Bad file descriptor 00:23:48.627 [2024-05-15 00:39:14.765623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.627 [2024-05-15 00:39:14.765664] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:48.627 [2024-05-15 00:39:14.765709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.627 [2024-05-15 00:39:14.765734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.627 [2024-05-15 00:39:14.765756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.627 [2024-05-15 00:39:14.765771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.627 [2024-05-15 00:39:14.765787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.627 [2024-05-15 00:39:14.765802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.627 [2024-05-15 00:39:14.765817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.627 [2024-05-15 00:39:14.765831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.627 [2024-05-15 00:39:14.765846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.627 [2024-05-15 00:39:14.765861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.627 [2024-05-15 00:39:14.765877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
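The ABORTED / "in failed state" notices above are the host-side bdev_nvme layer reacting to the target address being pulled out from under the TCP connection. The step that triggers them is the interface removal recorded at host/discovery_remove_ifc.sh@75-76 earlier in the trace; a minimal sketch of that step, using the same namespace and interface names the log shows:

  # drop the target address and take the link down inside the target's network namespace
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

From the initiator's point of view this surfaces first as errno 110 (connection timed out) on the socket and then as the reconnect failures seen below.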
00:23:48.627 [2024-05-15 00:39:14.766109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22738d0 (9): Bad file descriptor 00:23:48.627 [2024-05-15 00:39:14.767131] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:48.627 [2024-05-15 00:39:14.767151] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:48.627 00:39:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.627 00:39:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:48.627 00:39:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:50.005 00:39:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:50.935 [2024-05-15 00:39:16.820218] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:50.935 [2024-05-15 00:39:16.820252] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:50.935 [2024-05-15 00:39:16.820293] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:50.935 [2024-05-15 00:39:16.906589] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:50.935 00:39:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:50.935 [2024-05-15 00:39:17.007891] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:50.935 [2024-05-15 00:39:17.007957] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:50.935 [2024-05-15 00:39:17.008010] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:50.935 [2024-05-15 00:39:17.008032] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:50.935 [2024-05-15 00:39:17.008044] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:50.935 [2024-05-15 00:39:17.016781] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22b76e0 was disconnected and freed. delete nvme_qpair. 
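The get_bdev_list/wait_for_bdev iterations that dominate this part of the log are a one-second poll against the host app's RPC socket: first waiting for nvme0n1 to disappear after the interface is removed, then for nvme1n1 to appear once discovery re-attaches the subsystem. A condensed sketch of the logic those trace lines correspond to — the helper names come from discovery_remove_ifc.sh as shown in the trace, while the exact implementation there (timeouts, error handling) is assumed:

  # list the bdev names the SPDK host app currently exposes (rpc_cmd wraps scripts/rpc.py)
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # poll once per second until the bdev list matches the expectation ('' means "gone")
  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }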
00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.866 00:39:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 964613 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 964613 ']' 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 964613 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:51.866 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 964613 00:23:52.124 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:52.124 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:52.124 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 964613' 00:23:52.124 killing process with pid 964613 00:23:52.124 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 964613 00:23:52.124 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 964613 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.381 rmmod nvme_tcp 00:23:52.381 rmmod nvme_fabrics 00:23:52.381 rmmod nvme_keyring 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.381 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:52.382 
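The cleanup recorded here and in the lines that follow boils down to stopping both SPDK processes and unloading the kernel NVMe/TCP initiator modules. A condensed sketch using the pids and interface names from this run; _remove_spdk_ns is only eval'd in the trace, so the namespace-delete line is a hypothetical stand-in:

  # stop the host app, then the nvmf target, and unload the initiator-side modules
  kill "$host_pid" && wait "$host_pid"        # 964613 in this run
  modprobe -v -r nvme-tcp                     # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$target_pid" && wait "$target_pid"    # 964586 in this run
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # hypothetical equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1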
00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 964586 ']' 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 964586 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 964586 ']' 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 964586 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 964586 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 964586' 00:23:52.382 killing process with pid 964586 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 964586 00:23:52.382 [2024-05-15 00:39:18.371440] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:52.382 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 964586 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.640 00:39:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.545 00:39:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.545 00:23:54.545 real 0m18.183s 00:23:54.545 user 0m24.946s 00:23:54.545 sys 0m3.246s 00:23:54.545 00:39:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:54.545 00:39:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.545 ************************************ 00:23:54.545 END TEST nvmf_discovery_remove_ifc 00:23:54.545 ************************************ 00:23:54.804 00:39:20 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:54.804 00:39:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:54.804 00:39:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:54.804 00:39:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:54.804 
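The next test, nvmf_identify_kernel_target, points spdk_nvme_identify at a kernel NVMe-oF/TCP target rather than an SPDK one. Later in the trace the setup shows up only as a series of mkdir/echo/ln -s steps with the redirect targets stripped by xtrace, so the mapping onto configfs files below is an assumption based on the standard nvmet attribute names; a rough sketch of that setup:

  # expose a local NVMe namespace through the kernel nvmet TCP target (attribute names assumed)
  modprobe nvmet        # shown in the trace
  modprobe nvmet-tcp    # assumed for the tcp transport
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

The nvme discover output further down (two discovery log records, the second for nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420) is what confirms the target is reachable before the identify passes run.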
************************************ 00:23:54.804 START TEST nvmf_identify_kernel_target 00:23:54.804 ************************************ 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:54.804 * Looking for test storage... 00:23:54.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:54.804 00:39:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:54.804 00:39:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.332 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:57.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:57.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:57.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:57.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:23:57.333 00:23:57.333 --- 10.0.0.2 ping statistics --- 00:23:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.333 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:23:57.333 00:23:57.333 --- 10.0.0.1 ping statistics --- 00:23:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.333 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:57.333 00:39:23 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:57.333 00:39:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:58.705 Waiting for block devices as requested 00:23:58.705 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:58.705 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:58.705 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:58.962 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:58.962 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:58.962 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:58.962 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:59.218 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:59.218 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:59.218 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:59.218 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:59.475 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:59.475 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:59.475 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:59.475 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:59.475 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:59.735 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:59.735 No valid GPT data, bailing 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:59.735 00:23:59.735 Discovery Log Number of Records 2, Generation counter 2 00:23:59.735 =====Discovery Log Entry 0====== 00:23:59.735 trtype: tcp 00:23:59.735 adrfam: ipv4 00:23:59.735 subtype: current discovery subsystem 00:23:59.735 treq: not specified, sq flow control disable supported 00:23:59.735 portid: 1 00:23:59.735 trsvcid: 4420 00:23:59.735 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:59.735 traddr: 10.0.0.1 00:23:59.735 eflags: none 00:23:59.735 sectype: none 00:23:59.735 =====Discovery Log Entry 1====== 00:23:59.735 trtype: tcp 00:23:59.735 adrfam: ipv4 00:23:59.735 subtype: nvme subsystem 00:23:59.735 treq: not specified, sq flow control disable supported 00:23:59.735 portid: 1 00:23:59.735 trsvcid: 4420 00:23:59.735 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:59.735 traddr: 10.0.0.1 00:23:59.735 eflags: none 00:23:59.735 sectype: none 00:23:59.735 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:59.735 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:59.735 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.028 ===================================================== 00:24:00.028 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:00.028 ===================================================== 00:24:00.028 Controller Capabilities/Features 00:24:00.028 ================================ 00:24:00.028 Vendor ID: 0000 00:24:00.028 Subsystem Vendor ID: 0000 00:24:00.028 Serial Number: 3feaf130f7c1bc520897 00:24:00.028 Model Number: Linux 00:24:00.028 Firmware Version: 6.7.0-68 00:24:00.028 Recommended Arb Burst: 0 00:24:00.028 IEEE OUI Identifier: 00 00 00 00:24:00.028 Multi-path I/O 00:24:00.028 May have multiple subsystem ports: No 00:24:00.029 May have multiple 
controllers: No 00:24:00.029 Associated with SR-IOV VF: No 00:24:00.029 Max Data Transfer Size: Unlimited 00:24:00.029 Max Number of Namespaces: 0 00:24:00.029 Max Number of I/O Queues: 1024 00:24:00.029 NVMe Specification Version (VS): 1.3 00:24:00.029 NVMe Specification Version (Identify): 1.3 00:24:00.029 Maximum Queue Entries: 1024 00:24:00.029 Contiguous Queues Required: No 00:24:00.029 Arbitration Mechanisms Supported 00:24:00.029 Weighted Round Robin: Not Supported 00:24:00.029 Vendor Specific: Not Supported 00:24:00.029 Reset Timeout: 7500 ms 00:24:00.029 Doorbell Stride: 4 bytes 00:24:00.029 NVM Subsystem Reset: Not Supported 00:24:00.029 Command Sets Supported 00:24:00.029 NVM Command Set: Supported 00:24:00.029 Boot Partition: Not Supported 00:24:00.029 Memory Page Size Minimum: 4096 bytes 00:24:00.029 Memory Page Size Maximum: 4096 bytes 00:24:00.029 Persistent Memory Region: Not Supported 00:24:00.029 Optional Asynchronous Events Supported 00:24:00.029 Namespace Attribute Notices: Not Supported 00:24:00.029 Firmware Activation Notices: Not Supported 00:24:00.029 ANA Change Notices: Not Supported 00:24:00.029 PLE Aggregate Log Change Notices: Not Supported 00:24:00.029 LBA Status Info Alert Notices: Not Supported 00:24:00.029 EGE Aggregate Log Change Notices: Not Supported 00:24:00.029 Normal NVM Subsystem Shutdown event: Not Supported 00:24:00.029 Zone Descriptor Change Notices: Not Supported 00:24:00.029 Discovery Log Change Notices: Supported 00:24:00.029 Controller Attributes 00:24:00.029 128-bit Host Identifier: Not Supported 00:24:00.029 Non-Operational Permissive Mode: Not Supported 00:24:00.029 NVM Sets: Not Supported 00:24:00.029 Read Recovery Levels: Not Supported 00:24:00.029 Endurance Groups: Not Supported 00:24:00.029 Predictable Latency Mode: Not Supported 00:24:00.029 Traffic Based Keep ALive: Not Supported 00:24:00.029 Namespace Granularity: Not Supported 00:24:00.029 SQ Associations: Not Supported 00:24:00.029 UUID List: Not Supported 00:24:00.029 Multi-Domain Subsystem: Not Supported 00:24:00.029 Fixed Capacity Management: Not Supported 00:24:00.029 Variable Capacity Management: Not Supported 00:24:00.029 Delete Endurance Group: Not Supported 00:24:00.029 Delete NVM Set: Not Supported 00:24:00.029 Extended LBA Formats Supported: Not Supported 00:24:00.029 Flexible Data Placement Supported: Not Supported 00:24:00.029 00:24:00.029 Controller Memory Buffer Support 00:24:00.029 ================================ 00:24:00.029 Supported: No 00:24:00.029 00:24:00.029 Persistent Memory Region Support 00:24:00.029 ================================ 00:24:00.029 Supported: No 00:24:00.029 00:24:00.029 Admin Command Set Attributes 00:24:00.029 ============================ 00:24:00.029 Security Send/Receive: Not Supported 00:24:00.029 Format NVM: Not Supported 00:24:00.029 Firmware Activate/Download: Not Supported 00:24:00.029 Namespace Management: Not Supported 00:24:00.029 Device Self-Test: Not Supported 00:24:00.029 Directives: Not Supported 00:24:00.029 NVMe-MI: Not Supported 00:24:00.029 Virtualization Management: Not Supported 00:24:00.029 Doorbell Buffer Config: Not Supported 00:24:00.029 Get LBA Status Capability: Not Supported 00:24:00.029 Command & Feature Lockdown Capability: Not Supported 00:24:00.029 Abort Command Limit: 1 00:24:00.029 Async Event Request Limit: 1 00:24:00.029 Number of Firmware Slots: N/A 00:24:00.029 Firmware Slot 1 Read-Only: N/A 00:24:00.029 Firmware Activation Without Reset: N/A 00:24:00.029 Multiple Update Detection Support: N/A 
00:24:00.029 Firmware Update Granularity: No Information Provided 00:24:00.029 Per-Namespace SMART Log: No 00:24:00.029 Asymmetric Namespace Access Log Page: Not Supported 00:24:00.029 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:00.029 Command Effects Log Page: Not Supported 00:24:00.029 Get Log Page Extended Data: Supported 00:24:00.029 Telemetry Log Pages: Not Supported 00:24:00.029 Persistent Event Log Pages: Not Supported 00:24:00.029 Supported Log Pages Log Page: May Support 00:24:00.029 Commands Supported & Effects Log Page: Not Supported 00:24:00.029 Feature Identifiers & Effects Log Page:May Support 00:24:00.029 NVMe-MI Commands & Effects Log Page: May Support 00:24:00.029 Data Area 4 for Telemetry Log: Not Supported 00:24:00.029 Error Log Page Entries Supported: 1 00:24:00.029 Keep Alive: Not Supported 00:24:00.029 00:24:00.029 NVM Command Set Attributes 00:24:00.029 ========================== 00:24:00.029 Submission Queue Entry Size 00:24:00.029 Max: 1 00:24:00.029 Min: 1 00:24:00.029 Completion Queue Entry Size 00:24:00.029 Max: 1 00:24:00.029 Min: 1 00:24:00.029 Number of Namespaces: 0 00:24:00.029 Compare Command: Not Supported 00:24:00.029 Write Uncorrectable Command: Not Supported 00:24:00.029 Dataset Management Command: Not Supported 00:24:00.029 Write Zeroes Command: Not Supported 00:24:00.029 Set Features Save Field: Not Supported 00:24:00.029 Reservations: Not Supported 00:24:00.029 Timestamp: Not Supported 00:24:00.029 Copy: Not Supported 00:24:00.029 Volatile Write Cache: Not Present 00:24:00.029 Atomic Write Unit (Normal): 1 00:24:00.029 Atomic Write Unit (PFail): 1 00:24:00.029 Atomic Compare & Write Unit: 1 00:24:00.029 Fused Compare & Write: Not Supported 00:24:00.029 Scatter-Gather List 00:24:00.029 SGL Command Set: Supported 00:24:00.029 SGL Keyed: Not Supported 00:24:00.029 SGL Bit Bucket Descriptor: Not Supported 00:24:00.029 SGL Metadata Pointer: Not Supported 00:24:00.029 Oversized SGL: Not Supported 00:24:00.029 SGL Metadata Address: Not Supported 00:24:00.029 SGL Offset: Supported 00:24:00.029 Transport SGL Data Block: Not Supported 00:24:00.029 Replay Protected Memory Block: Not Supported 00:24:00.029 00:24:00.029 Firmware Slot Information 00:24:00.029 ========================= 00:24:00.029 Active slot: 0 00:24:00.029 00:24:00.029 00:24:00.029 Error Log 00:24:00.029 ========= 00:24:00.029 00:24:00.029 Active Namespaces 00:24:00.029 ================= 00:24:00.029 Discovery Log Page 00:24:00.029 ================== 00:24:00.029 Generation Counter: 2 00:24:00.029 Number of Records: 2 00:24:00.029 Record Format: 0 00:24:00.029 00:24:00.029 Discovery Log Entry 0 00:24:00.029 ---------------------- 00:24:00.029 Transport Type: 3 (TCP) 00:24:00.029 Address Family: 1 (IPv4) 00:24:00.029 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:00.029 Entry Flags: 00:24:00.029 Duplicate Returned Information: 0 00:24:00.029 Explicit Persistent Connection Support for Discovery: 0 00:24:00.029 Transport Requirements: 00:24:00.029 Secure Channel: Not Specified 00:24:00.029 Port ID: 1 (0x0001) 00:24:00.029 Controller ID: 65535 (0xffff) 00:24:00.029 Admin Max SQ Size: 32 00:24:00.029 Transport Service Identifier: 4420 00:24:00.029 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:00.029 Transport Address: 10.0.0.1 00:24:00.029 Discovery Log Entry 1 00:24:00.029 ---------------------- 00:24:00.029 Transport Type: 3 (TCP) 00:24:00.029 Address Family: 1 (IPv4) 00:24:00.029 Subsystem Type: 2 (NVM Subsystem) 00:24:00.029 Entry Flags: 
00:24:00.029 Duplicate Returned Information: 0 00:24:00.029 Explicit Persistent Connection Support for Discovery: 0 00:24:00.029 Transport Requirements: 00:24:00.029 Secure Channel: Not Specified 00:24:00.029 Port ID: 1 (0x0001) 00:24:00.029 Controller ID: 65535 (0xffff) 00:24:00.029 Admin Max SQ Size: 32 00:24:00.029 Transport Service Identifier: 4420 00:24:00.029 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:00.029 Transport Address: 10.0.0.1 00:24:00.029 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:00.029 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.029 get_feature(0x01) failed 00:24:00.029 get_feature(0x02) failed 00:24:00.029 get_feature(0x04) failed 00:24:00.029 ===================================================== 00:24:00.029 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:00.029 ===================================================== 00:24:00.029 Controller Capabilities/Features 00:24:00.029 ================================ 00:24:00.029 Vendor ID: 0000 00:24:00.029 Subsystem Vendor ID: 0000 00:24:00.029 Serial Number: 6c43bf2111359613782e 00:24:00.029 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:00.029 Firmware Version: 6.7.0-68 00:24:00.029 Recommended Arb Burst: 6 00:24:00.029 IEEE OUI Identifier: 00 00 00 00:24:00.029 Multi-path I/O 00:24:00.029 May have multiple subsystem ports: Yes 00:24:00.029 May have multiple controllers: Yes 00:24:00.029 Associated with SR-IOV VF: No 00:24:00.029 Max Data Transfer Size: Unlimited 00:24:00.029 Max Number of Namespaces: 1024 00:24:00.029 Max Number of I/O Queues: 128 00:24:00.029 NVMe Specification Version (VS): 1.3 00:24:00.029 NVMe Specification Version (Identify): 1.3 00:24:00.029 Maximum Queue Entries: 1024 00:24:00.029 Contiguous Queues Required: No 00:24:00.029 Arbitration Mechanisms Supported 00:24:00.030 Weighted Round Robin: Not Supported 00:24:00.030 Vendor Specific: Not Supported 00:24:00.030 Reset Timeout: 7500 ms 00:24:00.030 Doorbell Stride: 4 bytes 00:24:00.030 NVM Subsystem Reset: Not Supported 00:24:00.030 Command Sets Supported 00:24:00.030 NVM Command Set: Supported 00:24:00.030 Boot Partition: Not Supported 00:24:00.030 Memory Page Size Minimum: 4096 bytes 00:24:00.030 Memory Page Size Maximum: 4096 bytes 00:24:00.030 Persistent Memory Region: Not Supported 00:24:00.030 Optional Asynchronous Events Supported 00:24:00.030 Namespace Attribute Notices: Supported 00:24:00.030 Firmware Activation Notices: Not Supported 00:24:00.030 ANA Change Notices: Supported 00:24:00.030 PLE Aggregate Log Change Notices: Not Supported 00:24:00.030 LBA Status Info Alert Notices: Not Supported 00:24:00.030 EGE Aggregate Log Change Notices: Not Supported 00:24:00.030 Normal NVM Subsystem Shutdown event: Not Supported 00:24:00.030 Zone Descriptor Change Notices: Not Supported 00:24:00.030 Discovery Log Change Notices: Not Supported 00:24:00.030 Controller Attributes 00:24:00.030 128-bit Host Identifier: Supported 00:24:00.030 Non-Operational Permissive Mode: Not Supported 00:24:00.030 NVM Sets: Not Supported 00:24:00.030 Read Recovery Levels: Not Supported 00:24:00.030 Endurance Groups: Not Supported 00:24:00.030 Predictable Latency Mode: Not Supported 00:24:00.030 Traffic Based Keep ALive: Supported 00:24:00.030 Namespace Granularity: Not Supported 
00:24:00.030 SQ Associations: Not Supported 00:24:00.030 UUID List: Not Supported 00:24:00.030 Multi-Domain Subsystem: Not Supported 00:24:00.030 Fixed Capacity Management: Not Supported 00:24:00.030 Variable Capacity Management: Not Supported 00:24:00.030 Delete Endurance Group: Not Supported 00:24:00.030 Delete NVM Set: Not Supported 00:24:00.030 Extended LBA Formats Supported: Not Supported 00:24:00.030 Flexible Data Placement Supported: Not Supported 00:24:00.030 00:24:00.030 Controller Memory Buffer Support 00:24:00.030 ================================ 00:24:00.030 Supported: No 00:24:00.030 00:24:00.030 Persistent Memory Region Support 00:24:00.030 ================================ 00:24:00.030 Supported: No 00:24:00.030 00:24:00.030 Admin Command Set Attributes 00:24:00.030 ============================ 00:24:00.030 Security Send/Receive: Not Supported 00:24:00.030 Format NVM: Not Supported 00:24:00.030 Firmware Activate/Download: Not Supported 00:24:00.030 Namespace Management: Not Supported 00:24:00.030 Device Self-Test: Not Supported 00:24:00.030 Directives: Not Supported 00:24:00.030 NVMe-MI: Not Supported 00:24:00.030 Virtualization Management: Not Supported 00:24:00.030 Doorbell Buffer Config: Not Supported 00:24:00.030 Get LBA Status Capability: Not Supported 00:24:00.030 Command & Feature Lockdown Capability: Not Supported 00:24:00.030 Abort Command Limit: 4 00:24:00.030 Async Event Request Limit: 4 00:24:00.030 Number of Firmware Slots: N/A 00:24:00.030 Firmware Slot 1 Read-Only: N/A 00:24:00.030 Firmware Activation Without Reset: N/A 00:24:00.030 Multiple Update Detection Support: N/A 00:24:00.030 Firmware Update Granularity: No Information Provided 00:24:00.030 Per-Namespace SMART Log: Yes 00:24:00.030 Asymmetric Namespace Access Log Page: Supported 00:24:00.030 ANA Transition Time : 10 sec 00:24:00.030 00:24:00.030 Asymmetric Namespace Access Capabilities 00:24:00.030 ANA Optimized State : Supported 00:24:00.030 ANA Non-Optimized State : Supported 00:24:00.030 ANA Inaccessible State : Supported 00:24:00.030 ANA Persistent Loss State : Supported 00:24:00.030 ANA Change State : Supported 00:24:00.030 ANAGRPID is not changed : No 00:24:00.030 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:00.030 00:24:00.030 ANA Group Identifier Maximum : 128 00:24:00.030 Number of ANA Group Identifiers : 128 00:24:00.030 Max Number of Allowed Namespaces : 1024 00:24:00.030 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:00.030 Command Effects Log Page: Supported 00:24:00.030 Get Log Page Extended Data: Supported 00:24:00.030 Telemetry Log Pages: Not Supported 00:24:00.030 Persistent Event Log Pages: Not Supported 00:24:00.030 Supported Log Pages Log Page: May Support 00:24:00.030 Commands Supported & Effects Log Page: Not Supported 00:24:00.030 Feature Identifiers & Effects Log Page:May Support 00:24:00.030 NVMe-MI Commands & Effects Log Page: May Support 00:24:00.030 Data Area 4 for Telemetry Log: Not Supported 00:24:00.030 Error Log Page Entries Supported: 128 00:24:00.030 Keep Alive: Supported 00:24:00.030 Keep Alive Granularity: 1000 ms 00:24:00.030 00:24:00.030 NVM Command Set Attributes 00:24:00.030 ========================== 00:24:00.030 Submission Queue Entry Size 00:24:00.030 Max: 64 00:24:00.030 Min: 64 00:24:00.030 Completion Queue Entry Size 00:24:00.030 Max: 16 00:24:00.030 Min: 16 00:24:00.030 Number of Namespaces: 1024 00:24:00.030 Compare Command: Not Supported 00:24:00.030 Write Uncorrectable Command: Not Supported 00:24:00.030 Dataset Management Command: Supported 
00:24:00.030 Write Zeroes Command: Supported 00:24:00.030 Set Features Save Field: Not Supported 00:24:00.030 Reservations: Not Supported 00:24:00.030 Timestamp: Not Supported 00:24:00.030 Copy: Not Supported 00:24:00.030 Volatile Write Cache: Present 00:24:00.030 Atomic Write Unit (Normal): 1 00:24:00.030 Atomic Write Unit (PFail): 1 00:24:00.030 Atomic Compare & Write Unit: 1 00:24:00.030 Fused Compare & Write: Not Supported 00:24:00.030 Scatter-Gather List 00:24:00.030 SGL Command Set: Supported 00:24:00.030 SGL Keyed: Not Supported 00:24:00.030 SGL Bit Bucket Descriptor: Not Supported 00:24:00.030 SGL Metadata Pointer: Not Supported 00:24:00.030 Oversized SGL: Not Supported 00:24:00.030 SGL Metadata Address: Not Supported 00:24:00.030 SGL Offset: Supported 00:24:00.030 Transport SGL Data Block: Not Supported 00:24:00.030 Replay Protected Memory Block: Not Supported 00:24:00.030 00:24:00.030 Firmware Slot Information 00:24:00.030 ========================= 00:24:00.030 Active slot: 0 00:24:00.030 00:24:00.030 Asymmetric Namespace Access 00:24:00.030 =========================== 00:24:00.030 Change Count : 0 00:24:00.030 Number of ANA Group Descriptors : 1 00:24:00.030 ANA Group Descriptor : 0 00:24:00.030 ANA Group ID : 1 00:24:00.030 Number of NSID Values : 1 00:24:00.030 Change Count : 0 00:24:00.030 ANA State : 1 00:24:00.030 Namespace Identifier : 1 00:24:00.030 00:24:00.030 Commands Supported and Effects 00:24:00.030 ============================== 00:24:00.030 Admin Commands 00:24:00.030 -------------- 00:24:00.030 Get Log Page (02h): Supported 00:24:00.030 Identify (06h): Supported 00:24:00.030 Abort (08h): Supported 00:24:00.030 Set Features (09h): Supported 00:24:00.030 Get Features (0Ah): Supported 00:24:00.030 Asynchronous Event Request (0Ch): Supported 00:24:00.030 Keep Alive (18h): Supported 00:24:00.030 I/O Commands 00:24:00.030 ------------ 00:24:00.030 Flush (00h): Supported 00:24:00.030 Write (01h): Supported LBA-Change 00:24:00.030 Read (02h): Supported 00:24:00.030 Write Zeroes (08h): Supported LBA-Change 00:24:00.030 Dataset Management (09h): Supported 00:24:00.030 00:24:00.030 Error Log 00:24:00.030 ========= 00:24:00.030 Entry: 0 00:24:00.030 Error Count: 0x3 00:24:00.030 Submission Queue Id: 0x0 00:24:00.030 Command Id: 0x5 00:24:00.030 Phase Bit: 0 00:24:00.030 Status Code: 0x2 00:24:00.030 Status Code Type: 0x0 00:24:00.030 Do Not Retry: 1 00:24:00.030 Error Location: 0x28 00:24:00.030 LBA: 0x0 00:24:00.030 Namespace: 0x0 00:24:00.030 Vendor Log Page: 0x0 00:24:00.030 ----------- 00:24:00.030 Entry: 1 00:24:00.030 Error Count: 0x2 00:24:00.030 Submission Queue Id: 0x0 00:24:00.030 Command Id: 0x5 00:24:00.030 Phase Bit: 0 00:24:00.030 Status Code: 0x2 00:24:00.030 Status Code Type: 0x0 00:24:00.030 Do Not Retry: 1 00:24:00.030 Error Location: 0x28 00:24:00.030 LBA: 0x0 00:24:00.030 Namespace: 0x0 00:24:00.030 Vendor Log Page: 0x0 00:24:00.030 ----------- 00:24:00.030 Entry: 2 00:24:00.030 Error Count: 0x1 00:24:00.030 Submission Queue Id: 0x0 00:24:00.030 Command Id: 0x4 00:24:00.030 Phase Bit: 0 00:24:00.030 Status Code: 0x2 00:24:00.030 Status Code Type: 0x0 00:24:00.030 Do Not Retry: 1 00:24:00.030 Error Location: 0x28 00:24:00.030 LBA: 0x0 00:24:00.030 Namespace: 0x0 00:24:00.030 Vendor Log Page: 0x0 00:24:00.030 00:24:00.030 Number of Queues 00:24:00.030 ================ 00:24:00.030 Number of I/O Submission Queues: 128 00:24:00.030 Number of I/O Completion Queues: 128 00:24:00.030 00:24:00.030 ZNS Specific Controller Data 00:24:00.030 
============================ 00:24:00.030 Zone Append Size Limit: 0 00:24:00.030 00:24:00.030 00:24:00.030 Active Namespaces 00:24:00.030 ================= 00:24:00.030 get_feature(0x05) failed 00:24:00.031 Namespace ID:1 00:24:00.031 Command Set Identifier: NVM (00h) 00:24:00.031 Deallocate: Supported 00:24:00.031 Deallocated/Unwritten Error: Not Supported 00:24:00.031 Deallocated Read Value: Unknown 00:24:00.031 Deallocate in Write Zeroes: Not Supported 00:24:00.031 Deallocated Guard Field: 0xFFFF 00:24:00.031 Flush: Supported 00:24:00.031 Reservation: Not Supported 00:24:00.031 Namespace Sharing Capabilities: Multiple Controllers 00:24:00.031 Size (in LBAs): 1953525168 (931GiB) 00:24:00.031 Capacity (in LBAs): 1953525168 (931GiB) 00:24:00.031 Utilization (in LBAs): 1953525168 (931GiB) 00:24:00.031 UUID: 91bf2e57-0bb6-45bd-9e0e-cd6ab9d4ebb8 00:24:00.031 Thin Provisioning: Not Supported 00:24:00.031 Per-NS Atomic Units: Yes 00:24:00.031 Atomic Boundary Size (Normal): 0 00:24:00.031 Atomic Boundary Size (PFail): 0 00:24:00.031 Atomic Boundary Offset: 0 00:24:00.031 NGUID/EUI64 Never Reused: No 00:24:00.031 ANA group ID: 1 00:24:00.031 Namespace Write Protected: No 00:24:00.031 Number of LBA Formats: 1 00:24:00.031 Current LBA Format: LBA Format #00 00:24:00.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:00.031 00:24:00.031 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:00.031 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.031 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:00.031 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.031 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:00.031 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.031 00:39:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.031 rmmod nvme_tcp 00:24:00.031 rmmod nvme_fabrics 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.031 00:39:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.957 
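The nvmftestfini trace above boils down to a short teardown recipe: settle outstanding I/O, unload the NVMe/TCP initiator modules, drop the target network namespace, and clear the address left on the test interface. A condensed sketch using the interface and namespace names from this run (the "ip netns del" line is an assumption about what the _remove_spdk_ns helper ultimately does; it is hidden behind xtrace_disable here):

sync                                      # settle outstanding I/O before unloading modules
modprobe -v -r nvme-tcp                   # removes nvme_tcp, and nvme_fabrics as a dependency
modprobe -v -r nvme-fabrics               # harmless no-op if the previous step already removed it
ip netns del cvl_0_0_ns_spdk 2>/dev/null  # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                  # drop the test address from cvl_0_1, as traced above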
00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:01.957 00:39:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:03.329 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:03.329 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:03.329 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:03.329 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:03.329 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:03.329 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:03.329 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:03.329 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:03.329 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:04.703 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:04.703 00:24:04.703 real 0m9.832s 00:24:04.703 user 0m2.246s 00:24:04.703 sys 0m3.771s 00:24:04.703 00:39:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:04.703 00:39:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.703 ************************************ 00:24:04.703 END TEST nvmf_identify_kernel_target 00:24:04.703 ************************************ 00:24:04.703 00:39:30 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:04.703 00:39:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:04.703 00:39:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:04.703 00:39:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.703 ************************************ 00:24:04.703 START TEST nvmf_auth_host 00:24:04.703 ************************************ 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
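The setup.sh pass at the end of the previous test rebinds the ioatdma channels and the NVMe drive at 0000:88:00.0 from their kernel drivers to vfio-pci so SPDK can claim them for the next test. A minimal sketch of the generic sysfs mechanism behind one such "nvme -> vfio-pci" transition (setup.sh may do this differently internally; the device address is simply the one reported in this run):

modprobe vfio-pci
dev=0000:88:00.0
echo "$dev"   > /sys/bus/pci/devices/$dev/driver/unbind     # detach from the nvme driver
echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override   # pin the driver we want next
echo "$dev"   > /sys/bus/pci/drivers_probe                  # ask the kernel to bind it to vfio-pci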
00:24:04.703 * Looking for test storage... 00:24:04.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.703 00:39:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.704 00:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.232 
00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:07.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:07.232 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:07.232 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:07.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.232 00:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:24:07.232 00:24:07.232 --- 10.0.0.2 ping statistics --- 00:24:07.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.232 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:07.232 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:24:07.232 00:24:07.232 --- 10.0.0.1 ping statistics --- 00:24:07.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.233 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=973087 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 973087 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 973087 ']' 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
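The nvmf_tcp_init trace above is the whole fabric setup for this job: the two E810 ports are split across a network namespace so the target side (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator side (cvl_0_1, 10.0.0.1, in the root namespace) talk over a real link, a firewall rule admits the NVMe/TCP port, and a ping in each direction proves connectivity before nvmf_tgt is started inside the namespace. Condensed from the commands shown above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic from the namespace
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions answered in ~0.14 ms above
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
# the target runs in the background; the trace captures its pid (nvmfpid=973087) for waitforlisten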
00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:07.233 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d5b0cb0d235e420e8fdaafa5a67a997d 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9lJ 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d5b0cb0d235e420e8fdaafa5a67a997d 0 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d5b0cb0d235e420e8fdaafa5a67a997d 0 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d5b0cb0d235e420e8fdaafa5a67a997d 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9lJ 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9lJ 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.9lJ 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:07.490 
00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=675cbc728b40c26bdc24f78c3edd30d0b539979399338e5cf38f9fa318616689 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5N2 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 675cbc728b40c26bdc24f78c3edd30d0b539979399338e5cf38f9fa318616689 3 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 675cbc728b40c26bdc24f78c3edd30d0b539979399338e5cf38f9fa318616689 3 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=675cbc728b40c26bdc24f78c3edd30d0b539979399338e5cf38f9fa318616689 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5N2 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5N2 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.5N2 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1ccc1a55d42a530c313d864cbe19733b19eaeb6d9fe34e4a 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cp9 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1ccc1a55d42a530c313d864cbe19733b19eaeb6d9fe34e4a 0 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1ccc1a55d42a530c313d864cbe19733b19eaeb6d9fe34e4a 0 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1ccc1a55d42a530c313d864cbe19733b19eaeb6d9fe34e4a 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cp9 00:24:07.490 00:39:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cp9 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cp9 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36cdcd7e841902d49197b9fa90e4f333059cfef9fe1fc3c7 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VKh 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36cdcd7e841902d49197b9fa90e4f333059cfef9fe1fc3c7 2 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36cdcd7e841902d49197b9fa90e4f333059cfef9fe1fc3c7 2 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36cdcd7e841902d49197b9fa90e4f333059cfef9fe1fc3c7 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:07.490 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VKh 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VKh 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VKh 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=166c08d3e4c72496fb6d044ea0629ce4 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.M0x 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 166c08d3e4c72496fb6d044ea0629ce4 1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 166c08d3e4c72496fb6d044ea0629ce4 1 
00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=166c08d3e4c72496fb6d044ea0629ce4 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.M0x 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.M0x 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.M0x 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c9a8d26eceede0389536332bf9763186 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LO4 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c9a8d26eceede0389536332bf9763186 1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c9a8d26eceede0389536332bf9763186 1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c9a8d26eceede0389536332bf9763186 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LO4 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LO4 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.LO4 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=55787f17fd3d24aeece9abf2dcdfc8a4c524e19d47ac46ca 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.97P 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 55787f17fd3d24aeece9abf2dcdfc8a4c524e19d47ac46ca 2 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 55787f17fd3d24aeece9abf2dcdfc8a4c524e19d47ac46ca 2 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=55787f17fd3d24aeece9abf2dcdfc8a4c524e19d47ac46ca 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.97P 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.97P 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.97P 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6877bd53bcd2b9b40c5989e1b8fbf256 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sln 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6877bd53bcd2b9b40c5989e1b8fbf256 0 00:24:07.748 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6877bd53bcd2b9b40c5989e1b8fbf256 0 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6877bd53bcd2b9b40c5989e1b8fbf256 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sln 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sln 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sln 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0547e43c90a96bbd6de2f04629273411f0cc600dd988233f26e0d72119d0e8b0 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.V7h 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0547e43c90a96bbd6de2f04629273411f0cc600dd988233f26e0d72119d0e8b0 3 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0547e43c90a96bbd6de2f04629273411f0cc600dd988233f26e0d72119d0e8b0 3 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0547e43c90a96bbd6de2f04629273411f0cc600dd988233f26e0d72119d0e8b0 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:07.749 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:08.005 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.V7h 00:24:08.005 00:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.V7h 00:24:08.005 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.V7h 00:24:08.005 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:08.005 00:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 973087 00:24:08.005 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 973087 ']' 00:24:08.005 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.006 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:08.006 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
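Each gen_dhchap_key call above follows one pattern: read N random bytes as hex from /dev/urandom with xxd, wrap them in a DHHC-1 secret string via format_dhchap_key, and store the result in a mode-0600 temp file (keys[] for one side of the exchange, ckeys[] for the other). The python one-liner behind format_key is not visible in the trace; the sketch below assumes the usual NVMe-oF in-band authentication secret layout of base64(key bytes plus little-endian CRC-32 of the key) behind a DHHC-1:<digest id>: prefix, so treat it as illustrative rather than the exact helper:

key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex characters, as generated for keys[0] above
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])                      # raw key material
digest = int(sys.argv[2])                             # 0=null, 1=sha256, 2=sha384, 3=sha512, matching the digest= values traced above
crc = binascii.crc32(key).to_bytes(4, "little")       # assumed integrity trailer per the DHHC-1 convention
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"                      # the trace locks each key file down the same way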
00:24:08.006 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:08.006 00:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9lJ 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.5N2 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5N2 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cp9 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VKh ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VKh 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.M0x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.LO4 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LO4 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
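
[editor's note] Once waitforlisten sees the RPC socket, the host/auth.sh@80-82 loop above registers every generated secret file with the running target (the key3/key4 iterations continue just below). rpc_cmd in the log is a thin wrapper around scripts/rpc.py, so, assuming the keys/ckeys arrays built earlier, the loop condenses to:

# Condensed form of the keyring registration loop seen in the log: each secret
# file becomes a named key (key0..key4), plus an optional controller key.
for i in "${!keys[@]}"; do
    ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
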
00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.97P 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sln ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sln 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.V7h 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
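
[editor's note] configure_kernel_target, whose locals appear at the end of the block above, builds a kernel nvmet target out of configfs entries; the setup.sh reset and block-device probing that follow pick /dev/nvme0n1 as the backing namespace. A condensed sketch of the layout it creates is below; the attribute names follow the standard Linux nvmet configfs and should be checked against nvmf/common.sh for the exact sequence.

# Sketch of the kernel target assembled by configure_kernel_target (standard nvmet configfs).
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"          # model string echoed in the log
echo 1             > "$subsys/attr_allow_any_host"                     # assumed target of the bare 'echo 1'
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$port/addr_traddr"
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# nvmet_auth_init later adds a hosts/nqn.2024-02.io.spdk:host0 entry, flips
# allow_any_host back to 0, and links the host into allowed_hosts (see the log below).
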
00:24:08.263 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:08.264 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:08.264 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:08.264 00:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:09.635 Waiting for block devices as requested 00:24:09.635 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:09.635 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:09.635 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:09.635 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:09.635 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:09.892 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:09.892 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:09.892 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:09.892 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:10.149 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:10.149 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:10.149 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:10.407 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:10.407 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:10.407 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:10.407 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:10.664 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:10.920 00:39:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:11.178 No valid GPT data, bailing 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:11.178 00:24:11.178 Discovery Log Number of Records 2, Generation counter 2 00:24:11.178 =====Discovery Log Entry 0====== 00:24:11.178 trtype: tcp 00:24:11.178 adrfam: ipv4 00:24:11.178 subtype: current discovery subsystem 00:24:11.178 treq: not specified, sq flow control disable supported 00:24:11.178 portid: 1 00:24:11.178 trsvcid: 4420 00:24:11.178 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:11.178 traddr: 10.0.0.1 00:24:11.178 eflags: none 00:24:11.178 sectype: none 00:24:11.178 =====Discovery Log Entry 1====== 00:24:11.178 trtype: tcp 00:24:11.178 adrfam: ipv4 00:24:11.178 subtype: nvme subsystem 00:24:11.178 treq: not specified, sq flow control disable supported 00:24:11.178 portid: 1 00:24:11.178 trsvcid: 4420 00:24:11.178 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:11.178 traddr: 10.0.0.1 00:24:11.178 eflags: none 00:24:11.178 sectype: none 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 
]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.178 nvme0n1 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.178 00:39:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.178 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.436 
00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.436 nvme0n1 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.436 00:39:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.436 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.437 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 nvme0n1 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
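
[editor's note] Each connect_authenticate pass, like the one that just completed above, drives the host side purely through RPCs: it restricts the allowed DH-HMAC-CHAP digests and DH groups, attaches the controller with the key (and optional controller key) for the keyid under test, confirms the controller came up, then detaches. Stripped of the rpc_cmd wrapper, one pass is:

# One connect_authenticate iteration (sha256 / ffdhe2048 / keyid 1), as plain rpc.py calls.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
./scripts/rpc.py bdev_nvme_detach_controller nvme0
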
00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.694 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.695 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 nvme0n1 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:11.952 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:11.952 00:39:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.953 00:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.953 nvme0n1 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.953 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.210 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 nvme0n1 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.211 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.468 nvme0n1 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.469 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.726 nvme0n1 00:24:12.726 
00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:12.726 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.727 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.984 nvme0n1 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
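
[editor's note] On the target side, each nvmet_auth_set_key call in these iterations re-points the kernel's per-host DH-HMAC-CHAP settings before the host reconnects: the 'hmac(sha256)' / 'ffdhe3072' strings and DHHC-1 secrets echoed in the log are written into the host entry's configfs attributes. The sketch below assumes the standard nvmet attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the redirect targets; the echoed values are taken verbatim from the keyid-3 iteration above.

# Assumed configfs destinations for the echoes performed by nvmet_auth_set_key (keyid 3, ffdhe3072).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe3072      > "$host/dhchap_dhgroup"
echo "DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==:" > "$host/dhchap_key"
echo "DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv:" > "$host/dhchap_ctrl_key"
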
00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.984 00:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.984 nvme0n1 00:24:12.984 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.984 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.984 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.984 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.984 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.241 
00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.241 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.242 00:39:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.242 nvme0n1 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.242 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.499 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:13.500 00:39:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.500 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.756 nvme0n1 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.756 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.757 00:39:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.757 00:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.014 nvme0n1 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.014 00:39:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.014 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.272 nvme0n1 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.272 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.273 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.273 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.273 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.273 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.273 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.531 nvme0n1 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.531 00:39:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.531 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.789 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.790 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.052 nvme0n1 00:24:15.052 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.052 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.052 00:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.052 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.052 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.052 00:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:15.052 00:39:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.052 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.616 nvme0n1 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.616 
00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.616 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.617 00:39:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.617 00:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.181 nvme0n1 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.181 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.747 nvme0n1 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.747 
00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.747 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.748 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.748 00:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.748 00:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.748 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.748 00:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.312 nvme0n1 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.312 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.877 nvme0n1 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.877 00:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.809 nvme0n1 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.810 00:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.741 nvme0n1 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:19.741 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.742 00:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.673 nvme0n1 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.673 
00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.673 00:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.604 nvme0n1 00:24:21.604 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.604 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.604 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.604 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.604 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.604 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.861 
00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.861 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.862 00:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.792 nvme0n1 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.792 nvme0n1 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.792 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.793 00:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.051 nvme0n1 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.051 nvme0n1 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.051 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:23.309 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.310 nvme0n1 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.310 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.567 nvme0n1 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:23.567 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.568 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.568 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.568 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.568 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.568 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.568 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.825 nvme0n1 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.825 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.826 00:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.084 nvme0n1 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.084 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.341 nvme0n1 00:24:24.341 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.342 nvme0n1 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.342 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.627 nvme0n1 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.627 00:39:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.627 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.908 00:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.908 nvme0n1 00:24:24.908 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.908 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.908 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.908 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.908 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.908 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.166 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.424 nvme0n1 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.424 00:39:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.424 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 nvme0n1 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:25.682 00:39:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.682 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.940 nvme0n1 00:24:25.940 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.940 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.940 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.940 00:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.940 00:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:24:25.940 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.198 nvme0n1 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.198 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.455 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.455 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.456 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.021 nvme0n1 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.021 00:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.587 nvme0n1 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.587 00:39:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.587 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.844 nvme0n1 00:24:27.844 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.844 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.844 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.844 00:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.844 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.844 00:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.101 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.102 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.359 nvme0n1 00:24:28.359 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.359 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.359 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.359 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.359 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.616 00:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.179 nvme0n1 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.179 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
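[Editor's sketch, not part of the captured trace] Each connect_authenticate iteration traced above (the @55-@65 markers) restricts the initiator to one digest/dhgroup, attaches with the named DH-HMAC-CHAP key, checks that the controller appears, and detaches. A simplified sketch, assuming the key names (key0..key4, ckey0..) were registered earlier in the script:

# Simplified from the host/auth.sh@55-65 xtrace; error handling and waits are omitted.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Optional bidirectional (controller) key, only when ckey$keyid is defined.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Restrict the initiator to the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Authenticated connect to the kernel target; the trace resolves 10.0.0.1 via get_main_ns_ip.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # The connect succeeded if the controller shows up under its bdev name, then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}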
00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:29.180 00:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.109 nvme0n1 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.109 00:39:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.041 nvme0n1 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.041 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.042 00:39:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.974 nvme0n1 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.974 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.975 00:39:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.907 nvme0n1 00:24:32.907 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.907 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:32.907 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.907 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.907 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.907 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.908 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.908 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.908 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.908 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.165 00:39:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.165 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.166 00:39:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.166 00:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.166 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.166 00:39:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 nvme0n1 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 nvme0n1 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.098 00:40:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.098 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 nvme0n1 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:34.357 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.358 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.616 nvme0n1 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.616 00:40:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.616 00:40:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.616 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.875 nvme0n1 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.875 nvme0n1 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.875 00:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.875 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.133 nvme0n1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.133 
00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.133 00:40:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.133 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.390 nvme0n1 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
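The trace above repeats one fixed sequence for every (digest, dhgroup, keyid) combination that host/auth.sh exercises: program the key pair into the target (nvmet_auth_set_key), restrict the host to the digest/dhgroup under test (bdev_nvme_set_options), attach the controller with --dhchap-key/--dhchap-ctrlr-key, confirm nvme0 shows up in bdev_nvme_get_controllers, then detach before the next iteration. A minimal stand-alone sketch of that host-side sequence, assuming SPDK's scripts/rpc.py is available and that the DH-HMAC-CHAP secrets were already registered under the keyring names key0/ckey0 (that setup happens earlier in auth.sh and is not shown in this excerpt; the flags, NQNs and 10.0.0.1 listener are taken from the log itself):

#!/usr/bin/env bash
# Illustrative reconstruction of the host-side steps traced above, not the test script itself.
set -e
rpc=./scripts/rpc.py   # path is an assumption; the log only shows the rpc_cmd wrapper

# Limit negotiation to the digest/dhgroup pair under test (here sha512 + ffdhe3072).
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Connect with bidirectional DH-HMAC-CHAP: host key plus controller (ctrlr) key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The iteration passes if the controller actually came up...
$rpc bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

# ...and is torn down again before the next (digest, dhgroup, keyid) combination.
$rpc bdev_nvme_detach_controller nvme0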
00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.390 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.391 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.391 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.391 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.391 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.648 nvme0n1 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.648 00:40:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.648 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.907 nvme0n1 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.907 
00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.907 00:40:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.184 nvme0n1 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.184 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.452 nvme0n1 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.452 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.453 00:40:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.453 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.710 nvme0n1 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
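Every secret echoed in the trace is a DH-HMAC-CHAP blob of the form DHHC-1:<hh>:<base64 payload>:, where keyN is the host key for key index N and ckeyN is the matching controller key used for bidirectional authentication. Reading the <hh> field as 00 = unhashed secret, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512 follows the usual NVMe DH-HMAC-CHAP key convention; that interpretation is an assumption here, the log only shows the raw strings. A small inspection helper, purely illustrative:

# Split a DHHC-1 secret (as printed in the trace) into its fields.
inspect_dhchap_key() {
    local blob=$1
    local prefix hash b64
    IFS=: read -r prefix hash b64 _ <<< "$blob"
    echo "format : $prefix"     # always DHHC-1 in this run
    echo "hash id: $hash"       # 00/01/02/03 in the traced keys
    # The payload is base64; its decoded length hints at the secret size.
    echo "payload: $(printf '%s' "$b64" | base64 -d | wc -c) bytes (decoded)"
}

# Example using one of the key strings from the log above.
inspect_dhchap_key 'DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN:'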
00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.710 00:40:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 nvme0n1 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.968 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.969 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.229 nvme0n1 00:24:37.229 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.229 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.229 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.229 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.229 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.487 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.744 nvme0n1 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:37.744 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.745 00:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.310 nvme0n1 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
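The host/auth.sh@101 and @102 entries expose the two loops driving this whole section: an outer loop over the FFDHE groups (ffdhe3072 finished above, then ffdhe4096, with ffdhe6144 in progress here) and an inner loop over the five key indexes 0 through 4, so every group is exercised with every key. A compact sketch of that control flow, using the helper names visible in the trace; the exact array contents beyond what appears in this excerpt are assumptions, and the excerpt only covers the sha512 pass:

# Shape of the loops traced in this section (sha512 pass only).
digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # groups visible in this excerpt
keys=(key0 key1 key2 key3 key4)            # five DHHC-1 secrets, keyid 0..4

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Program the target side with this digest/dhgroup/key combination...
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # ...then run the set_options/attach/get_controllers/detach sequence shown earlier.
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done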
00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.310 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.875 nvme0n1 00:24:38.875 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.876 00:40:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.441 nvme0n1 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.441 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.006 nvme0n1 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.006 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.007 00:40:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.007 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.608 nvme0n1 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.608 00:40:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViMGNiMGQyMzVlNDIwZThmZGFhZmE1YTY3YTk5N2RJpxwu: 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc1Y2JjNzI4YjQwYzI2YmRjMjRmNzhjM2VkZDMwZDBiNTM5OTc5Mzk5MzM4ZTVjZjM4ZjlmYTMxODYxNjY4OUNRjlo=: 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.608 00:40:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.541 nvme0n1 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.541 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.542 00:40:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.475 nvme0n1 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.475 00:40:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTY2YzA4ZDNlNGM3MjQ5NmZiNmQwNDRlYTA2MjljZTTFMNAN: 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzlhOGQyNmVjZWVkZTAzODk1MzYzMzJiZjk3NjMxODaFQXlm: 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.475 00:40:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.406 nvme0n1 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTU3ODdmMTdmZDNkMjRhZWVjZTlhYmYyZGNkZmM4YTRjNTI0ZTE5ZDQ3YWM0NmNhZzVBHg==: 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Njg3N2JkNTNiY2QyYjliNDBjNTk4OWUxYjhmYmYyNTY7lfiv: 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:43.406 00:40:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.406 00:40:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.338 nvme0n1 00:24:44.338 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.338 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.338 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.595 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0N2U0M2M5MGE5NmJiZDZkZTJmMDQ2MjkyNzM0MTFmMGNjNjAwZGQ5ODgyMzNmMjZlMGQ3MjExOWQwZThiMB3wZQw=: 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:24:44.596 00:40:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.529 nvme0n1 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNjYzFhNTVkNDJhNTMwYzMxM2Q4NjRjYmUxOTczM2IxOWVhZWI2ZDlmZTM0ZTRhwjKr2Q==: 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzZjZGNkN2U4NDE5MDJkNDkxOTdiOWZhOTBlNGYzMzMwNTljZmVmOWZlMWZjM2M3a24e5A==: 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.529 
00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.529 request: 00:24:45.529 { 00:24:45.529 "name": "nvme0", 00:24:45.529 "trtype": "tcp", 00:24:45.529 "traddr": "10.0.0.1", 00:24:45.529 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:45.529 "adrfam": "ipv4", 00:24:45.529 "trsvcid": "4420", 00:24:45.529 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:45.529 "method": "bdev_nvme_attach_controller", 00:24:45.529 "req_id": 1 00:24:45.529 } 00:24:45.529 Got JSON-RPC error response 00:24:45.529 response: 00:24:45.529 { 00:24:45.529 "code": -32602, 00:24:45.529 "message": "Invalid parameters" 00:24:45.529 } 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:45.529 
00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.529 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.530 request: 00:24:45.530 { 00:24:45.530 "name": "nvme0", 00:24:45.530 "trtype": "tcp", 00:24:45.530 "traddr": "10.0.0.1", 00:24:45.530 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:45.530 "adrfam": "ipv4", 00:24:45.530 "trsvcid": "4420", 00:24:45.530 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:45.530 "dhchap_key": "key2", 00:24:45.530 "method": "bdev_nvme_attach_controller", 00:24:45.530 "req_id": 1 00:24:45.530 } 00:24:45.530 Got JSON-RPC error response 00:24:45.530 response: 00:24:45.530 { 00:24:45.530 "code": -32602, 00:24:45.530 "message": "Invalid parameters" 00:24:45.530 } 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 
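The two rejections above are the intended negative path: the kernel target in this section was provisioned with key 1 (plus its controller key), so an attach attempt with no DH-HMAC-CHAP key at all, or with the non-matching key2, comes back as JSON-RPC error -32602 ("Invalid parameters") and bdev_nvme_get_controllers keeps reporting zero controllers; the check that follows repeats the exercise with key1 paired with the wrong controller key (ckey2). For reference, a minimal sketch of the matching positive and negative calls, mirroring the rpc_cmd invocations traced in this section -- scripts/rpc.py from the SPDK checkout is assumed, 10.0.0.1 is the initiator address used in this run, and key1/ckey1/key2 are the key names the script registered earlier (not shown here):

  # Host side: limit the initiator to the digest/dhgroup under test, then
  # attach with the key material that matches what the target was given.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" on success
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

  # Negative path: the same attach with a mismatched key (or none at all) should
  # fail with -32602, exactly as in the request/response dumps above.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2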
00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.530 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:45.788 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.789 request: 00:24:45.789 { 00:24:45.789 "name": "nvme0", 00:24:45.789 "trtype": "tcp", 00:24:45.789 "traddr": "10.0.0.1", 00:24:45.789 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:45.789 "adrfam": "ipv4", 00:24:45.789 "trsvcid": "4420", 00:24:45.789 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:45.789 "dhchap_key": "key1", 00:24:45.789 "dhchap_ctrlr_key": "ckey2", 00:24:45.789 "method": "bdev_nvme_attach_controller", 00:24:45.789 
"req_id": 1 00:24:45.789 } 00:24:45.789 Got JSON-RPC error response 00:24:45.789 response: 00:24:45.789 { 00:24:45.789 "code": -32602, 00:24:45.789 "message": "Invalid parameters" 00:24:45.789 } 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.789 rmmod nvme_tcp 00:24:45.789 rmmod nvme_fabrics 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 973087 ']' 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 973087 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 973087 ']' 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 973087 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 973087 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 973087' 00:24:45.789 killing process with pid 973087 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 973087 00:24:45.789 00:40:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 973087 00:24:46.048 00:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:46.048 00:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:46.048 00:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:46.048 00:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.048 00:40:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.048 00:40:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.048 00:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.048 00:40:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:48.582 00:40:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:49.515 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:49.515 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:49.515 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:50.453 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:50.710 00:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9lJ /tmp/spdk.key-null.cp9 /tmp/spdk.key-sha256.M0x /tmp/spdk.key-sha384.97P /tmp/spdk.key-sha512.V7h /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:50.710 00:40:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:52.083 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:52.083 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:52.083 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:24:52.083 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:52.083 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:52.083 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:52.083 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:52.083 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:52.083 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:52.083 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:52.083 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:52.083 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:52.083 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:52.083 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:52.083 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:52.083 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:52.083 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:52.083 00:24:52.083 real 0m47.414s 00:24:52.083 user 0m44.683s 00:24:52.083 sys 0m6.127s 00:24:52.083 00:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:52.083 00:40:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.083 ************************************ 00:24:52.083 END TEST nvmf_auth_host 00:24:52.083 ************************************ 00:24:52.083 00:40:18 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:24:52.083 00:40:18 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:52.083 00:40:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:52.083 00:40:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:52.083 00:40:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.083 ************************************ 00:24:52.083 START TEST nvmf_digest 00:24:52.083 ************************************ 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:52.083 * Looking for test storage... 
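Before nvmf_digest gets under way, note how the auth run tore itself down in the lines just above: nvmftestfini unloads the host-side nvme-tcp/nvme-fabrics modules and kills the target process (pid 973087 in this run), and clean_kernel_target then unwinds the kernel nvmet configuration through configfs in the reverse order it was created. Condensed from the traced commands, the target-side sequence is roughly:

  rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # echo 0 > ...   (disable before removal; the redirect target is elided by xtrace)
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet

The temporary DHHC key files (/tmp/spdk.key-*) are removed at the same time, and setup.sh rebinds the ioatdma and NVMe devices to vfio-pci ahead of the next test.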
00:24:52.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.083 00:40:18 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.083 00:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:54.614 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:54.614 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:54.614 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:54.614 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:54.614 00:24:54.614 --- 10.0.0.2 ping statistics --- 00:24:54.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.614 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:24:54.614 00:24:54.614 --- 10.0.0.1 ping statistics --- 00:24:54.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.614 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.614 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:54.615 00:40:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:54.911 ************************************ 00:24:54.911 START TEST nvmf_digest_clean 00:24:54.911 ************************************ 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=982842 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 982842 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 982842 ']' 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.911 
00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:54.911 00:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.911 [2024-05-15 00:40:20.844121] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:24:54.911 [2024-05-15 00:40:20.844199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.911 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.911 [2024-05-15 00:40:20.918539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.911 [2024-05-15 00:40:21.022135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.911 [2024-05-15 00:40:21.022187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.911 [2024-05-15 00:40:21.022221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.911 [2024-05-15 00:40:21.022232] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.911 [2024-05-15 00:40:21.022242] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
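Note: nvmfappstart brings the target up with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace prepared earlier, so nothing initializes until the script talks to it over /var/tmp/spdk.sock; waitforlisten simply polls that socket until pid 982842 responds. A sketch of the same start-up done by hand (the command line is copied from the trace; framework_start_init is the usual first RPC after --wait-for-rpc, and the rest of common_target_config is not expanded in this trace):

    # start the target in its namespace, then unblock initialization over the default RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    ./scripts/rpc.py framework_start_init
    # host/digest.sh@43 then issues the rpc_cmd batch that creates the null0 bdev, the tcp
    # transport and the 10.0.0.2:4420 listener reported by the *NOTICE* lines below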
00:24:54.911 [2024-05-15 00:40:21.022273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:55.170 null0 00:24:55.170 [2024-05-15 00:40:21.187994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.170 [2024-05-15 00:40:21.211979] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:55.170 [2024-05-15 00:40:21.212246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=982978 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 982978 /var/tmp/bperf.sock 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 982978 ']' 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:55.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:55.170 00:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:55.170 [2024-05-15 00:40:21.258046] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:24:55.170 [2024-05-15 00:40:21.258115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982978 ] 00:24:55.170 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.170 [2024-05-15 00:40:21.331526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.428 [2024-05-15 00:40:21.447733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.362 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:56.362 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:24:56.362 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:56.362 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:56.362 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:56.620 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.620 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.877 nvme0n1 00:24:56.877 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:56.877 00:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.877 Running I/O for 2 seconds... 
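Note: this is the host half of run_bperf. bdevperf was started with --wait-for-rpc on /var/tmp/bperf.sock, framework_start_init unblocks it, and bdev_nvme_attach_controller --ddgst connects to the target with the NVMe/TCP data digest enabled, which is what routes crc32c work through the accel layer under test; bdevperf.py perform_tests then drives the 2-second workload against the resulting nvme0n1 bdev. The same three calls, exactly as traced (run from the SPDK tree):

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests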
00:24:59.409 00:24:59.409 Latency(us) 00:24:59.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.409 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:59.409 nvme0n1 : 2.00 19545.57 76.35 0.00 0.00 6540.65 3203.98 12621.75 00:24:59.409 =================================================================================================================== 00:24:59.409 Total : 19545.57 76.35 0.00 0.00 6540.65 3203.98 12621.75 00:24:59.409 0 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:59.409 | select(.opcode=="crc32c") 00:24:59.409 | "\(.module_name) \(.executed)"' 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 982978 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 982978 ']' 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 982978 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 982978 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 982978' 00:24:59.409 killing process with pid 982978 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 982978 00:24:59.409 Received shutdown signal, test time was about 2.000000 seconds 00:24:59.409 00:24:59.409 Latency(us) 00:24:59.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.409 =================================================================================================================== 00:24:59.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.409 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 982978 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:59.667 00:40:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=983405 00:24:59.667 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:59.668 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 983405 /var/tmp/bperf.sock 00:24:59.668 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 983405 ']' 00:24:59.668 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:59.668 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:59.668 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:59.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:59.668 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:59.668 00:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:59.668 [2024-05-15 00:40:25.641457] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:24:59.668 [2024-05-15 00:40:25.641552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983405 ] 00:24:59.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:59.668 Zero copy mechanism will not be used. 
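Note: this second pass switches to 128 KiB I/O at queue depth 16; the repeated "I/O size of 131072 is greater than zero copy threshold (65536)" message only means the TCP initiator skips its zero-copy send path for I/O above 64 KiB, it is not an error. nvmf_digest_clean repeats the same setup/run/teardown over four combinations, invoked at host/digest.sh@128-131 in this trace:

    # run_bperf <rw> <bs> <qd> <scan_dsa>
    run_bperf randread  4096   128 false
    run_bperf randread  131072 16  false
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false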
00:24:59.668 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.668 [2024-05-15 00:40:25.719793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.926 [2024-05-15 00:40:25.836827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.490 00:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:00.490 00:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:00.490 00:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:00.490 00:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:00.490 00:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:01.057 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.057 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.314 nvme0n1 00:25:01.314 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:01.314 00:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.314 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:01.314 Zero copy mechanism will not be used. 00:25:01.314 Running I/O for 2 seconds... 
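Note: after every 2-second run the test pulls accel statistics from the bperf instance and filters out the crc32c entry; with scan_dsa=false the expected module is software, and the check only requires a non-zero executed count, i.e. proof that the data-digest path really went through crc32c. The verification step as it appears in the trace:

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected with scan_dsa=false:  software <non-zero executed count>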
00:25:03.842 00:25:03.842 Latency(us) 00:25:03.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.842 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:03.842 nvme0n1 : 2.01 2391.87 298.98 0.00 0.00 6685.51 6310.87 8689.59 00:25:03.842 =================================================================================================================== 00:25:03.842 Total : 2391.87 298.98 0.00 0.00 6685.51 6310.87 8689.59 00:25:03.842 0 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:03.842 | select(.opcode=="crc32c") 00:25:03.842 | "\(.module_name) \(.executed)"' 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 983405 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 983405 ']' 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 983405 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:03.842 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:03.843 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 983405 00:25:03.843 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:03.843 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:03.843 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 983405' 00:25:03.843 killing process with pid 983405 00:25:03.843 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 983405 00:25:03.843 Received shutdown signal, test time was about 2.000000 seconds 00:25:03.843 00:25:03.843 Latency(us) 00:25:03.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.843 =================================================================================================================== 00:25:03.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:03.843 00:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 983405 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:04.100 00:40:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=983938 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 983938 /var/tmp/bperf.sock 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 983938 ']' 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:04.100 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:04.100 [2024-05-15 00:40:30.093747] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
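Note: each pass ends the same way, as seen twice above: killprocess stops the bdevperf instance so the next one can reuse /var/tmp/bperf.sock, and the zero-valued Latency table printed under "Received shutdown signal" is bdevperf's shutdown-time summary for the already-finished job rather than a failed run. The teardown steps traced from common/autotest_common.sh:

    kill -0 "$bperfpid"                    # assert the process is still alive
    ps --no-headers -o comm= "$bperfpid"   # refuse to kill it if the command is 'sudo'
    kill "$bperfpid"
    wait "$bperfpid"                       # reap it; bdevperf prints its shutdown summary here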
00:25:04.101 [2024-05-15 00:40:30.093824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983938 ] 00:25:04.101 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.101 [2024-05-15 00:40:30.163744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.359 [2024-05-15 00:40:30.277536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.359 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:04.359 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:04.359 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:04.359 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:04.359 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:04.617 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:04.617 00:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.183 nvme0n1 00:25:05.183 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:05.183 00:40:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.183 Running I/O for 2 seconds... 
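Note: as in the earlier passes, attaching controller nvme0 exposes its first namespace as bdev nvme0n1, and that bdev is what the perform_tests job runs against. If a run needs debugging, the bdev can be inspected over the same socket (standard rpc.py call, not part of this trace):

    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_bdevs -b nvme0n1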
00:25:07.710 00:25:07.710 Latency(us) 00:25:07.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:07.710 nvme0n1 : 2.01 18481.77 72.19 0.00 0.00 6913.96 5898.24 20486.07 00:25:07.710 =================================================================================================================== 00:25:07.710 Total : 18481.77 72.19 0.00 0.00 6913.96 5898.24 20486.07 00:25:07.710 0 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:07.710 | select(.opcode=="crc32c") 00:25:07.710 | "\(.module_name) \(.executed)"' 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:07.710 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 983938 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 983938 ']' 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 983938 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 983938 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 983938' 00:25:07.711 killing process with pid 983938 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 983938 00:25:07.711 Received shutdown signal, test time was about 2.000000 seconds 00:25:07.711 00:25:07.711 Latency(us) 00:25:07.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.711 =================================================================================================================== 00:25:07.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.711 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 983938 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:07.969 00:40:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=984465 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 984465 /var/tmp/bperf.sock 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 984465 ']' 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:07.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:07.969 00:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:07.969 [2024-05-15 00:40:33.971688] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:07.969 [2024-05-15 00:40:33.971783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984465 ] 00:25:07.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:07.969 Zero copy mechanism will not be used. 
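Note: the bdevperf command line repeated for every pass packs the whole job description into flags; only -w, -o and -q change between the four combinations. A reading of the flags used above (bdevperf and generic SPDK app options; the core-1 reactor notice below matches the 0x2 mask):

    # -m 2                     core mask 0x2, a single reactor on core 1
    # -r /var/tmp/bperf.sock   dedicated RPC socket for this bperf instance
    # -w randwrite -o 131072 -q 16 -t 2   workload, I/O size (bytes), queue depth, runtime (s)
    # -z                       hold the job until bdevperf.py ... perform_tests is called
    # --wait-for-rpc           hold framework init until framework_start_init arrives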
00:25:07.969 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.969 [2024-05-15 00:40:34.040655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.227 [2024-05-15 00:40:34.149996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.159 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:09.160 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:25:09.160 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:09.160 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:09.160 00:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:09.417 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.417 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.675 nvme0n1 00:25:09.675 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:09.675 00:40:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:09.675 Zero copy mechanism will not be used. 00:25:09.675 Running I/O for 2 seconds... 
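Note: the Latency(us) tables report both IOPS and MiB/s, and the second column is just the first scaled by the I/O size, which makes a quick sanity check across the 4 KiB and 128 KiB passes:

    MiB/s = IOPS * io_size_bytes / 1048576
    19545.57 * 4096   / 1048576 =  76.35 MiB/s   (randread, 4 KiB pass above)
    1479.86  * 131072 / 1048576 = 184.98 MiB/s   (randwrite, 128 KiB pass below)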
00:25:12.201 00:25:12.201 Latency(us) 00:25:12.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.201 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:12.201 nvme0n1 : 2.01 1479.86 184.98 0.00 0.00 10780.58 3301.07 12913.02 00:25:12.201 =================================================================================================================== 00:25:12.201 Total : 1479.86 184.98 0.00 0.00 10780.58 3301.07 12913.02 00:25:12.201 0 00:25:12.201 00:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:12.201 00:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:12.201 00:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:12.201 00:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:12.201 00:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:12.201 | select(.opcode=="crc32c") 00:25:12.201 | "\(.module_name) \(.executed)"' 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 984465 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 984465 ']' 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 984465 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 984465 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 984465' 00:25:12.201 killing process with pid 984465 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 984465 00:25:12.201 Received shutdown signal, test time was about 2.000000 seconds 00:25:12.201 00:25:12.201 Latency(us) 00:25:12.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.201 =================================================================================================================== 00:25:12.201 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.201 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 984465 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 982842 00:25:12.490 00:40:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 982842 ']' 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 982842 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 982842 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 982842' 00:25:12.490 killing process with pid 982842 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 982842 00:25:12.490 [2024-05-15 00:40:38.444046] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:12.490 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 982842 00:25:12.748 00:25:12.748 real 0m17.924s 00:25:12.748 user 0m36.912s 00:25:12.748 sys 0m3.910s 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:12.748 ************************************ 00:25:12.748 END TEST nvmf_digest_clean 00:25:12.748 ************************************ 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:12.748 ************************************ 00:25:12.748 START TEST nvmf_digest_error 00:25:12.748 ************************************ 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=985033 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 985033 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 985033 ']' 
00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:12.748 00:40:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.748 [2024-05-15 00:40:38.828081] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:12.748 [2024-05-15 00:40:38.828176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.748 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.748 [2024-05-15 00:40:38.908807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.007 [2024-05-15 00:40:39.022627] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.007 [2024-05-15 00:40:39.022693] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.007 [2024-05-15 00:40:39.022720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.007 [2024-05-15 00:40:39.022733] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.007 [2024-05-15 00:40:39.022745] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:13.007 [2024-05-15 00:40:39.022777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.941 [2024-05-15 00:40:39.825310] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.941 null0 00:25:13.941 [2024-05-15 00:40:39.939191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.941 [2024-05-15 00:40:39.963200] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:13.941 [2024-05-15 00:40:39.963483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=985185 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 985185 /var/tmp/bperf.sock 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 985185 ']' 00:25:13.941 
00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:13.941 00:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.941 [2024-05-15 00:40:40.010630] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:13.941 [2024-05-15 00:40:40.010729] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985185 ] 00:25:13.941 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.941 [2024-05-15 00:40:40.092727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.199 [2024-05-15 00:40:40.211563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.132 00:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:15.132 00:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:15.132 00:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.132 00:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.132 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:15.132 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.132 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.132 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.132 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.132 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.697 nvme0n1 00:25:15.697 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:15.697 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.697 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.697 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.697 00:40:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:15.697 00:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:15.697 Running I/O for 2 seconds... 00:25:15.697 [2024-05-15 00:40:41.796204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.697 [2024-05-15 00:40:41.796267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.697 [2024-05-15 00:40:41.796289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.697 [2024-05-15 00:40:41.813066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.697 [2024-05-15 00:40:41.813098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.697 [2024-05-15 00:40:41.813131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.697 [2024-05-15 00:40:41.825945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.697 [2024-05-15 00:40:41.825992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.697 [2024-05-15 00:40:41.826010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.697 [2024-05-15 00:40:41.841185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.697 [2024-05-15 00:40:41.841215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.697 [2024-05-15 00:40:41.841257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.697 [2024-05-15 00:40:41.855176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.697 [2024-05-15 00:40:41.855205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.697 [2024-05-15 00:40:41.855238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.869741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.869773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.869792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.883003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.883032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.883066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.897133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.897177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.897193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.911788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.911821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.911840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.925708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.925742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.925760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.939882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.939915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.939941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.953520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.953553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.953572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.967015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.967049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.967082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.982401] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.982435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.982454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:41.994356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:41.994390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:41.994408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:42.009689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.009722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.009741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:42.022904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.022945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.022965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:42.037907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.037948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.037983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:42.053333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.053367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.053386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:42.064762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.064795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.064813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:15.955 [2024-05-15 00:40:42.082227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.082273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.082292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:42.098690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.098724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.098743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.955 [2024-05-15 00:40:42.113276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:15.955 [2024-05-15 00:40:42.113310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.955 [2024-05-15 00:40:42.113328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.127862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.127895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.127914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.140505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.140539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.140558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.155717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.155750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.155768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.168899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.168939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.168959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.182848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.182881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.182900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.197891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.197924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.197951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.212504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.212536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.212565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.225129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.225158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.225175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.240599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.240631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.240650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.213 [2024-05-15 00:40:42.254691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.213 [2024-05-15 00:40:42.254724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.213 [2024-05-15 00:40:42.254742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.267142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.267171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.214 [2024-05-15 00:40:42.267202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.282311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.282345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.214 [2024-05-15 00:40:42.282363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.295800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.295833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.214 [2024-05-15 00:40:42.295851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.310381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.310415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.214 [2024-05-15 00:40:42.310434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.324643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.324677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.214 [2024-05-15 00:40:42.324696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.339165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.339194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.214 [2024-05-15 00:40:42.339210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.353338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.353371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.214 [2024-05-15 00:40:42.353390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.214 [2024-05-15 00:40:42.366698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.214 [2024-05-15 00:40:42.366731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:16.214 [2024-05-15 00:40:42.366750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.471 [2024-05-15 00:40:42.382904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.382945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.382966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.394178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.394207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.394244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.409570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.409604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.409622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.423763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.423796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.423814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.439630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.439664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.439683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.451716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.451750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.451774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.469044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.469073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13741 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.469090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.483288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.483322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.483341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.496818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.496850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.496869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.512124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.512153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.512170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.523874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.523909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.523927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.538957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.539006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.539023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.554672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.554705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.554723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.568406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.568440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.568459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.580797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.580837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.580857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.595584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.595617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.595636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.610039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.610069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.610087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.472 [2024-05-15 00:40:42.626058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.472 [2024-05-15 00:40:42.626088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.472 [2024-05-15 00:40:42.626105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.637857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.637890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.637908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.653145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.653176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.653192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.666114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 
[2024-05-15 00:40:42.666143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.666175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.680397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.680431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.680449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.696512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.696545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.696563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.708462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.708495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.708514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.723424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.723457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.723475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.738057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.738087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.729 [2024-05-15 00:40:42.738103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.729 [2024-05-15 00:40:42.752020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.729 [2024-05-15 00:40:42.752049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.752066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.766426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.766458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.766477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.778397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.778430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.778449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.795600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.795633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.795652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.807390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.807422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.807440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.821636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.821669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.821694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.836896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.836937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.836958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.851302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.851335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.851354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.865246] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.865279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.865297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.730 [2024-05-15 00:40:42.879127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.730 [2024-05-15 00:40:42.879156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.730 [2024-05-15 00:40:42.879187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:42.893541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.893574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.893592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:42.906591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.906624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.906643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:42.920773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.920806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.920825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:42.934915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.934954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.934987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:42.949097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.949147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.949164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:16.988 [2024-05-15 00:40:42.962108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.962137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.962168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:42.975630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.975663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.975682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:42.990801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:42.990833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:42.990852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.002574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.002608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.002626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.018056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.018086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.018102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.032056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.032084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.032116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.046242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.046291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.046309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.059565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.059598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.059622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.073233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.073266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.073285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.088988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.089018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.089035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.102602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.102635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.102654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.115853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.115885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.115904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.131222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.131256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.131275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.988 [2024-05-15 00:40:43.145994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:16.988 [2024-05-15 00:40:43.146024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.988 [2024-05-15 00:40:43.146041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.246 [2024-05-15 00:40:43.159082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.246 [2024-05-15 00:40:43.159112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.246 [2024-05-15 00:40:43.159128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.246 [2024-05-15 00:40:43.173927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.246 [2024-05-15 00:40:43.173981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.246 [2024-05-15 00:40:43.173999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.246 [2024-05-15 00:40:43.186997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.246 [2024-05-15 00:40:43.187056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.246 [2024-05-15 00:40:43.187073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.200189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.200236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.200254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.215334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.215367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.215386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.230869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.230901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.230920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.244378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.244411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:17.247 [2024-05-15 00:40:43.244430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.258776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.258810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.258828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.271705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.271737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.271755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.286294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.286330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.286349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.300423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.300457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.300476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.313730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.313764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.313783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.327271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.327304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.327322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.342031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.342061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.342094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.354809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.354842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.354860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.369304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.369339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.369358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.383427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.383460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.383479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.247 [2024-05-15 00:40:43.398701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.247 [2024-05-15 00:40:43.398735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.247 [2024-05-15 00:40:43.398753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.507 [2024-05-15 00:40:43.411756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.507 [2024-05-15 00:40:43.411789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.507 [2024-05-15 00:40:43.411807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.507 [2024-05-15 00:40:43.425536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.425569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.425594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.440512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.440545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.440564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.452778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.452811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.452829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.467599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.467632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.467651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.480892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.480925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.480954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.495234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.495267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.495286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.508451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.508484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.508503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.522656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.522689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.522707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.535862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 
00:25:17.508 [2024-05-15 00:40:43.535895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.535914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.550120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.550156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.550173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.565626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.565659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.565678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.578247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.578280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.578298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.592350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.592384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.592402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.606721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.606762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.606780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.619299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.619332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.619350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.633953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.633995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.634010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.649664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.649697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.649715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.508 [2024-05-15 00:40:43.663094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.508 [2024-05-15 00:40:43.663123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.508 [2024-05-15 00:40:43.663156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.765 [2024-05-15 00:40:43.677660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.765 [2024-05-15 00:40:43.677693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.765 [2024-05-15 00:40:43.677712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.765 [2024-05-15 00:40:43.691866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.765 [2024-05-15 00:40:43.691898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.765 [2024-05-15 00:40:43.691916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.765 [2024-05-15 00:40:43.705450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.765 [2024-05-15 00:40:43.705483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.765 [2024-05-15 00:40:43.705502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.765 [2024-05-15 00:40:43.718275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.765 [2024-05-15 00:40:43.718328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.765 [2024-05-15 00:40:43.718345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.765 [2024-05-15 00:40:43.734062] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.765 [2024-05-15 00:40:43.734093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.765 [2024-05-15 00:40:43.734110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.765 [2024-05-15 00:40:43.748022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.765 [2024-05-15 00:40:43.748051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.765 [2024-05-15 00:40:43.748067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.765 [2024-05-15 00:40:43.762301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.766 [2024-05-15 00:40:43.762343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.766 [2024-05-15 00:40:43.762361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.766 [2024-05-15 00:40:43.776211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ed720) 00:25:17.766 [2024-05-15 00:40:43.776247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.766 [2024-05-15 00:40:43.776263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.766 00:25:17.766 Latency(us) 00:25:17.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.766 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:17.766 nvme0n1 : 2.01 18012.92 70.36 0.00 0.00 7094.84 3228.25 19126.80 00:25:17.766 =================================================================================================================== 00:25:17.766 Total : 18012.92 70.36 0.00 0.00 7094.84 3228.25 19126.80 00:25:17.766 0 00:25:17.766 00:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:17.766 00:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:17.766 00:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:17.766 00:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:17.766 | .driver_specific 00:25:17.766 | .nvme_error 00:25:17.766 | .status_code 00:25:17.766 | .command_transient_transport_error' 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 985185 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 985185 ']' 00:25:18.023 00:40:44 
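The randread pass above (4096-byte I/O at queue depth 128) ends with digest.sh's get_transient_errcount step: bdev_get_iostat is issued over the bperf RPC socket and the per-command NVMe error counters are filtered with jq. A minimal sketch of that query, reusing the socket path, bdev name, and jq filter visible in the trace (the counters are only populated because bdev_nvme_set_options is called with --nvme-error-stat, as it is again for the next pass below):

# Count READ completions that ended in COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In this run the query returned 141, which is what the (( 141 > 0 )) check at host/digest.sh@71 evaluates before the bperf process is killed and the next configuration is started.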
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 985185 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 985185 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 985185' 00:25:18.023 killing process with pid 985185 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 985185 00:25:18.023 Received shutdown signal, test time was about 2.000000 seconds 00:25:18.023 00:25:18.023 Latency(us) 00:25:18.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.023 =================================================================================================================== 00:25:18.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.023 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 985185 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=985718 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 985718 /var/tmp/bperf.sock 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 985718 ']' 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:18.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:18.281 00:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.281 [2024-05-15 00:40:44.372188] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:25:18.281 [2024-05-15 00:40:44.372268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985718 ] 00:25:18.281 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:18.281 Zero copy mechanism will not be used. 00:25:18.281 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.281 [2024-05-15 00:40:44.444167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.539 [2024-05-15 00:40:44.558630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.471 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:19.471 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:19.471 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:19.471 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:19.472 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:19.472 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.472 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.472 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.472 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.472 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:20.037 nvme0n1 00:25:20.037 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:20.037 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.037 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.037 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.037 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:20.037 00:40:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:20.037 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:20.037 Zero copy mechanism will not be used. 00:25:20.037 Running I/O for 2 seconds... 
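The trace above sets up the second error pass (131072-byte reads at queue depth 16): bdevperf is started with -z so it sits idle until told to run, the harness waits for its RPC socket, the controller is attached with data digest enabled, crc32c error injection is armed, and the workload is then driven over RPC. A rough sketch of that sequence using the paths and flags shown in this run; the backgrounding with & is illustrative, and the socket used by rpc_cmd for the accel error injection is not shown in the trace, so that line is left as in the original helper:

# Start bdevperf idle (-z) on its own RPC socket; digest.sh waits for the socket
# with waitforlisten before issuing any RPCs against it.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep per-command NVMe error statistics and retry failed commands indefinitely.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled (--ddgst) so each data PDU
# carries a CRC32C that the initiator verifies on receive.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption via the accel error-injection RPC, flags exactly as in the
# trace (the socket rpc_cmd uses is not shown above); this is what produces the data
# digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions logged below.
# rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the 2-second random-read workload on the idle bdevperf instance.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests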
00:25:20.037 [2024-05-15 00:40:46.061891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.061956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.061991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.075384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.075446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.075464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.088578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.088612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.088632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.101894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.101927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.101960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.115003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.115032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.115048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.128064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.128091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.128124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.141616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.141649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.141667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.154815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.154848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.167703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.167735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.167754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.180844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.180876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.180895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.037 [2024-05-15 00:40:46.194266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.037 [2024-05-15 00:40:46.194298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.037 [2024-05-15 00:40:46.194316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.295 [2024-05-15 00:40:46.207833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.295 [2024-05-15 00:40:46.207861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.295 [2024-05-15 00:40:46.207892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.295 [2024-05-15 00:40:46.221049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.295 [2024-05-15 00:40:46.221078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.295 [2024-05-15 00:40:46.221093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.295 [2024-05-15 00:40:46.234020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.295 [2024-05-15 00:40:46.234047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.295 [2024-05-15 00:40:46.234077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.295 [2024-05-15 00:40:46.247316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.295 [2024-05-15 00:40:46.247342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.295 [2024-05-15 00:40:46.247373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.295 [2024-05-15 00:40:46.260280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.295 [2024-05-15 00:40:46.260307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.295 [2024-05-15 00:40:46.260338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.295 [2024-05-15 00:40:46.273490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.295 [2024-05-15 00:40:46.273517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.295 [2024-05-15 00:40:46.273549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.286666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.286693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.286723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.299658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.299690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.299715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.312649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.312681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.312700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.326011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.326038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:20.296 [2024-05-15 00:40:46.326069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.339285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.339312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.339343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.352456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.352485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.352517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.365697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.365730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.365749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.378847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.378881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.378900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.391778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.391820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.391836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.405036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.405063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.405093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.418083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.418118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.418151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.430990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.431017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.431033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.444502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.444529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.444561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.296 [2024-05-15 00:40:46.457734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.296 [2024-05-15 00:40:46.457776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.296 [2024-05-15 00:40:46.457792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.470965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.471009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.471024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.483878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.483910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.483937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.496756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.496783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.496815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.510079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.510106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.510137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.523234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.523261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.523299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.536269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.536296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.536327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.549280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.549308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.549338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.562248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.562274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.562305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.575439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.575467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.575497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.588845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.588877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.588896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.602480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 
00:25:20.554 [2024-05-15 00:40:46.602507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.602539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.615956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.615999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.616014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.629022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.629048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.629080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.642326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.642363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.642383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.655489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.655516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.655547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.668729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.668761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.668779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.681863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.681894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.681912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.554 [2024-05-15 00:40:46.695450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.554 [2024-05-15 00:40:46.695482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.554 [2024-05-15 00:40:46.695501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.555 [2024-05-15 00:40:46.708853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.555 [2024-05-15 00:40:46.708884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.555 [2024-05-15 00:40:46.708902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.812 [2024-05-15 00:40:46.722665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.812 [2024-05-15 00:40:46.722697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.812 [2024-05-15 00:40:46.722716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.812 [2024-05-15 00:40:46.736136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.812 [2024-05-15 00:40:46.736164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.812 [2024-05-15 00:40:46.736195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.812 [2024-05-15 00:40:46.749753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.812 [2024-05-15 00:40:46.749785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.812 [2024-05-15 00:40:46.749804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.812 [2024-05-15 00:40:46.763047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.812 [2024-05-15 00:40:46.763073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.812 [2024-05-15 00:40:46.763103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.812 [2024-05-15 00:40:46.776877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.812 [2024-05-15 00:40:46.776920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.812 [2024-05-15 00:40:46.776948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.790317] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.790343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.790374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.803564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.803606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.803622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.816830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.816861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.816880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.830170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.830198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.830232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.843475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.843502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.843532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.857255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.857282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.857312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.872777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.872825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.872851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:20.813 [2024-05-15 00:40:46.888800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.888835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.888853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.904236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.904268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.904287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.919986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.920015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.920045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.935636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.935670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.935689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.951245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.951275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.951292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:20.813 [2024-05-15 00:40:46.965954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:20.813 [2024-05-15 00:40:46.965998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.813 [2024-05-15 00:40:46.966014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:46.981624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:46.981668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:46.981684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:46.997116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:46.997145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:46.997163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.011572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.011612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.011632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.026299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.026335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.026354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.040101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.040129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.040161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.053105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.053134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.053150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.066302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.066329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.066360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.079465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.079497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.079516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.092897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.092937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.092957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.106235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.106263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.106295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.119576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.119608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.119626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.132918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.132960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.132979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.146113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.146156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.146172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.159221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.159271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.159287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.172277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.172310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:21.071 [2024-05-15 00:40:47.172329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.186352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.186390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.186421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.199623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.199655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.199673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.212798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.212830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.212849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.071 [2024-05-15 00:40:47.226477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.071 [2024-05-15 00:40:47.226510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.071 [2024-05-15 00:40:47.226528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.329 [2024-05-15 00:40:47.239840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.329 [2024-05-15 00:40:47.239872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.329 [2024-05-15 00:40:47.239898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.329 [2024-05-15 00:40:47.253031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.329 [2024-05-15 00:40:47.253058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.329 [2024-05-15 00:40:47.253090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.329 [2024-05-15 00:40:47.265865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.329 [2024-05-15 00:40:47.265897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.329 [2024-05-15 00:40:47.265915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.329 [2024-05-15 00:40:47.279259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.329 [2024-05-15 00:40:47.279291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.329 [2024-05-15 00:40:47.279309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.292664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.292696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.292715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.306099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.306127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.306158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.319390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.319415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.319446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.332701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.332732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.332751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.345919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.345959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.345978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.359406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.359437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.359455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.373034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.373063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.373095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.386575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.386623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.386642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.399573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.399606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.399625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.412598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.412631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.412649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.425728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.425759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.425778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.438957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.439003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.439019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.452027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 
00:25:21.330 [2024-05-15 00:40:47.452054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.452085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.464917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.464956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.464995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.478061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.478087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.478103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.330 [2024-05-15 00:40:47.491738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.330 [2024-05-15 00:40:47.491770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.330 [2024-05-15 00:40:47.491788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.504844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.504876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.504894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.518248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.518289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.518305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.531656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.531688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.531706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.544854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.544884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.544903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.557937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.557968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.557987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.571211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.571238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.571269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.584356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.584395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.584414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.597911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.597958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.597977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.611307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.611339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.611358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.624882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.624913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.624939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.638149] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.638175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.638205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.651768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.651800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.651819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.665071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.665098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.665128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.678230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.678257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.678289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.691250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.691293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.691311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.704281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.704313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.588 [2024-05-15 00:40:47.704331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.588 [2024-05-15 00:40:47.717479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.588 [2024-05-15 00:40:47.717510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.589 [2024-05-15 00:40:47.717528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:21.589 [2024-05-15 00:40:47.730685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.589 [2024-05-15 00:40:47.730716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.589 [2024-05-15 00:40:47.730735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.589 [2024-05-15 00:40:47.744494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.589 [2024-05-15 00:40:47.744522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.589 [2024-05-15 00:40:47.744553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.846 [2024-05-15 00:40:47.758197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.758239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.758255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.771889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.771920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.771947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.785347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.785379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.785398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.798236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.798278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.811283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.811315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.811340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.824468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.824500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.824518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.837803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.837834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.837853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.851059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.851086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.851103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.864424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.864450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.864481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.877667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.877698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.877716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.890853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.890883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.890902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.903870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.903901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.903919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.916893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.916924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.916952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.929915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.929954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.929973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.943014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.943041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.943071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.955915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.955954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.955974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.968976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.969019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.969034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.981955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.982000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:47.982016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:47.994915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:47.994954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:21.847 [2024-05-15 00:40:47.994973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:21.847 [2024-05-15 00:40:48.008093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:21.847 [2024-05-15 00:40:48.008122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.847 [2024-05-15 00:40:48.008138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.105 [2024-05-15 00:40:48.021618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:22.105 [2024-05-15 00:40:48.021650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.105 [2024-05-15 00:40:48.021668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.105 [2024-05-15 00:40:48.034635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:22.105 [2024-05-15 00:40:48.034662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.105 [2024-05-15 00:40:48.034699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.105 [2024-05-15 00:40:48.047753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77e1c0) 00:25:22.105 [2024-05-15 00:40:48.047780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.105 [2024-05-15 00:40:48.047811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.105 00:25:22.105 Latency(us) 00:25:22.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.105 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:22.105 nvme0n1 : 2.00 2311.90 288.99 0.00 0.00 6915.98 6213.78 16214.09 00:25:22.105 =================================================================================================================== 00:25:22.105 Total : 2311.90 288.99 0.00 0.00 6915.98 6213.78 16214.09 00:25:22.105 0 00:25:22.105 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:22.105 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:22.105 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:22.105 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:22.105 | .driver_specific 00:25:22.105 | .nvme_error 00:25:22.105 | .status_code 00:25:22.105 | .command_transient_transport_error' 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:25:22.363 00:40:48 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 985718 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 985718 ']' 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 985718 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 985718 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 985718' 00:25:22.363 killing process with pid 985718 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 985718 00:25:22.363 Received shutdown signal, test time was about 2.000000 seconds 00:25:22.363 00:25:22.363 Latency(us) 00:25:22.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.363 =================================================================================================================== 00:25:22.363 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:22.363 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 985718 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=986258 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 986258 /var/tmp/bperf.sock 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 986258 ']' 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:22.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:22.621 00:40:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.621 [2024-05-15 00:40:48.688482] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:22.621 [2024-05-15 00:40:48.688562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986258 ] 00:25:22.621 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.621 [2024-05-15 00:40:48.760056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.879 [2024-05-15 00:40:48.874675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.811 00:40:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:24.376 nvme0n1 00:25:24.376 00:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:24.377 00:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.377 00:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.377 00:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.377 00:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:24.377 00:40:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:24.377 Running I/O for 2 seconds... 
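The host/digest.sh xtrace above sets up the randwrite error pass: bdevperf is launched against /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is injected on every 256th operation, and perform_tests starts the 2-second workload. The following is a minimal stand-alone sketch of that same sequence, restating only commands that appear verbatim in this trace; the one assumption is that the injection call (rpc_cmd in the script, issued without -s) lands on the target application's default RPC socket rather than on bperf.sock, which the trace does not state explicitly.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
# launch bdevperf on core 1 in wait mode; the script waits for the RPC socket via waitforlisten before issuing RPCs
$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z &
# keep per-controller NVMe error counters and retry failed I/O indefinitely at the bdev layer
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# the trace first disables any active crc32c injection, then re-arms it after the attach
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# attach the subsystem with TCP data digest enabled so payloads are CRC32C-protected
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt every 256th crc32c operation (assumed to go to the target's default socket, as rpc_cmd does)
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
# run the 2-second workload, then read back the transient transport error count, as the randread pass did above
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
The data digest error lines that follow are the expected effect of that injection; the script then checks the extracted count against zero, just as it did with (( 149 > 0 )) for the randread pass.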
00:25:24.377 [2024-05-15 00:40:50.449909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.377 [2024-05-15 00:40:50.450275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.377 [2024-05-15 00:40:50.450311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.377 [2024-05-15 00:40:50.464307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.377 [2024-05-15 00:40:50.464667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.377 [2024-05-15 00:40:50.464701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.377 [2024-05-15 00:40:50.478794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.377 [2024-05-15 00:40:50.479125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.377 [2024-05-15 00:40:50.479152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.377 [2024-05-15 00:40:50.493387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.377 [2024-05-15 00:40:50.493707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.377 [2024-05-15 00:40:50.493734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.377 [2024-05-15 00:40:50.508087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.377 [2024-05-15 00:40:50.508410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.377 [2024-05-15 00:40:50.508441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.377 [2024-05-15 00:40:50.522696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.377 [2024-05-15 00:40:50.523006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.377 [2024-05-15 00:40:50.523033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.377 [2024-05-15 00:40:50.536974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.377 [2024-05-15 00:40:50.537278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.377 [2024-05-15 00:40:50.537320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.551110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.551449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.551480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.565440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.565732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.565770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.579883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.580168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.580210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.594135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.594462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.594492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.608469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.608810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.608841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.622941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.623239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.623279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.637213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.637557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.637588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.651567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.651911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.651956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.665566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.665895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.665927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.678765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.679054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.679082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.692117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.692441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.692477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.705405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.705728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.705757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.718415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.718705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.718735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.731473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.731732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.731759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.744485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.635 [2024-05-15 00:40:50.744745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.635 [2024-05-15 00:40:50.744772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.635 [2024-05-15 00:40:50.757616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.636 [2024-05-15 00:40:50.757890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.636 [2024-05-15 00:40:50.757916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.636 [2024-05-15 00:40:50.770855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.636 [2024-05-15 00:40:50.771125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.636 [2024-05-15 00:40:50.771152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.636 [2024-05-15 00:40:50.784107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.636 [2024-05-15 00:40:50.784400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.636 [2024-05-15 00:40:50.784430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.636 [2024-05-15 00:40:50.797351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.636 [2024-05-15 00:40:50.797613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.636 [2024-05-15 00:40:50.797639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.893 [2024-05-15 00:40:50.810438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.893 [2024-05-15 00:40:50.810770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.893 [2024-05-15 00:40:50.810800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.893 [2024-05-15 00:40:50.823620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.893 [2024-05-15 00:40:50.823949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.893 [2024-05-15 00:40:50.823992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.893 [2024-05-15 00:40:50.836904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.893 [2024-05-15 00:40:50.837191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.837218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.850036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.850312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.850338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.863102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.863398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.863427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.876543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.876831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.876860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.889690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.889968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.889995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.902920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.903187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.903230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.916136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.916426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.916457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.929341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.929616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.929643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.942598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.942873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.942900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.955782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.956048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.956076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.968751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.969053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.969080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.981925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.982216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.982257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:50.995153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:50.995428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:50.995455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:51.008255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:51.008515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:51.008556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:51.021414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:51.021673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:51.021700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:51.034319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:51.034580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:51.034628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:24.894 [2024-05-15 00:40:51.047391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:24.894 [2024-05-15 00:40:51.047651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.894 [2024-05-15 00:40:51.047679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.060152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.060414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.060442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.073207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.073468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.073495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.086275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.086537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.086564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.099483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.099774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.099804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.112471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.112729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.112770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.125499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.125819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.125849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.138579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.138901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.138938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.151842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.152145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.152173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.164982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.165239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.165266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.177716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.177979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.178005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.190372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.190630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.190657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.203358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.203680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.203710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.216173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.216434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.216461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.228916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.229205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.229231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.241692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.241954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.241982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.254336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.254600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.254627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.267397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.267656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.267683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.280160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.280421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.280447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.152 [2024-05-15 00:40:51.293304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.152 [2024-05-15 00:40:51.293563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.152 [2024-05-15 00:40:51.293590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.153 [2024-05-15 00:40:51.306211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.153 [2024-05-15 00:40:51.306526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.153 [2024-05-15 00:40:51.306556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.319177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.319438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.319465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.331991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.332250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.332291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.345310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.345577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.345605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.358248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.358507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.358535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.371306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.371566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.371595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.384240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.384526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.384557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.397467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.397792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.397822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.410331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.410595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.410636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.423399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.423719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.423749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.436037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.436302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.436329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.449130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.449417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.449446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.461812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.462131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.462157] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.475012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.475275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.411 [2024-05-15 00:40:51.475303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.411 [2024-05-15 00:40:51.487630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.411 [2024-05-15 00:40:51.487978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.412 [2024-05-15 00:40:51.488012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.412 [2024-05-15 00:40:51.500779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.412 [2024-05-15 00:40:51.501056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.412 [2024-05-15 00:40:51.501084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.412 [2024-05-15 00:40:51.513713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.412 [2024-05-15 00:40:51.513975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.412 [2024-05-15 00:40:51.514003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.412 [2024-05-15 00:40:51.526544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.412 [2024-05-15 00:40:51.526806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.412 [2024-05-15 00:40:51.526833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.412 [2024-05-15 00:40:51.539504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.412 [2024-05-15 00:40:51.539824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.412 [2024-05-15 00:40:51.539855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.412 [2024-05-15 00:40:51.552236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.412 [2024-05-15 00:40:51.552496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.412 [2024-05-15 00:40:51.552523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.412 [2024-05-15 00:40:51.565031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.412 [2024-05-15 00:40:51.565299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.412 [2024-05-15 00:40:51.565326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.578089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.578350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.578377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.590942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.591234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.591261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.603804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.604083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.604110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.616441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.616702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.616728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.629631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.629891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.629919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.642701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.643032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.643061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.655782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.656103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.656130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.668957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.669238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.669266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.682248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.682590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.682620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.696306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.696623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.696653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.710070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.710389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.710419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.723858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.724175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.724205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.737577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.737897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 
00:40:51.737927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.751342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.670 [2024-05-15 00:40:51.751658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.670 [2024-05-15 00:40:51.751688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.670 [2024-05-15 00:40:51.765072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.671 [2024-05-15 00:40:51.765396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.671 [2024-05-15 00:40:51.765426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.671 [2024-05-15 00:40:51.778763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.671 [2024-05-15 00:40:51.779064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.671 [2024-05-15 00:40:51.779095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.671 [2024-05-15 00:40:51.792439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.671 [2024-05-15 00:40:51.792760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.671 [2024-05-15 00:40:51.792789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.671 [2024-05-15 00:40:51.806133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.671 [2024-05-15 00:40:51.806449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.671 [2024-05-15 00:40:51.806478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.671 [2024-05-15 00:40:51.819798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.671 [2024-05-15 00:40:51.820096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.671 [2024-05-15 00:40:51.820126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.833538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.833837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 
[2024-05-15 00:40:51.833873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.847284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.847602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.847631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.860957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.861245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.861275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.874615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.874913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.874950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.888515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.888806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.888835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.902207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.902497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.902527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.915867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.916164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.916194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.929551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.929844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:25.929 [2024-05-15 00:40:51.929874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.943224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.943548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.943578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.956882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.957186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.957216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.970587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.970907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.970943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.984248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.984570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.984599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:51.997903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:51.998201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:51.998231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:52.011636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:52.011924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:52.011964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:52.025294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:52.025611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24491 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:25.929 [2024-05-15 00:40:52.025640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:52.038944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:52.039235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:52.039265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:52.052638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.929 [2024-05-15 00:40:52.052958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.929 [2024-05-15 00:40:52.052987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.929 [2024-05-15 00:40:52.066301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.930 [2024-05-15 00:40:52.066614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.930 [2024-05-15 00:40:52.066643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:25.930 [2024-05-15 00:40:52.079962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:25.930 [2024-05-15 00:40:52.080259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:25.930 [2024-05-15 00:40:52.080288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.187 [2024-05-15 00:40:52.093707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.094019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.094049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.107417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.107736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.107765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.121052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.121343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15393 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.121371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.134782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.135081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.135111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.148458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.148745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.148774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.162133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.162453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.162482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.175822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.176119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.176149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.189486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.189772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.189802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.203155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.203471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.203500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.216823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.217128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24531 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.217157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.230491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.230812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.230841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.244181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.244500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.244530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.257817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.258116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.258146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.271576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.271877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.271905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.285238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.285553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.285582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.298962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.299257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.299286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.312640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.312928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3492 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.312969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.326296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.326617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.326646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.188 [2024-05-15 00:40:52.339981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.188 [2024-05-15 00:40:52.340308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.188 [2024-05-15 00:40:52.340338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.474 [2024-05-15 00:40:52.353721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.474 [2024-05-15 00:40:52.354034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.475 [2024-05-15 00:40:52.354077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.475 [2024-05-15 00:40:52.368167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.475 [2024-05-15 00:40:52.368493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.475 [2024-05-15 00:40:52.368524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.475 [2024-05-15 00:40:52.381876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.475 [2024-05-15 00:40:52.382178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.475 [2024-05-15 00:40:52.382209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.475 [2024-05-15 00:40:52.395598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.475 [2024-05-15 00:40:52.395927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.475 [2024-05-15 00:40:52.395972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.475 [2024-05-15 00:40:52.409300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.475 [2024-05-15 00:40:52.409620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14439 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.475 [2024-05-15 00:40:52.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.475 [2024-05-15 00:40:52.423027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.475 [2024-05-15 00:40:52.423324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.475 [2024-05-15 00:40:52.423353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.475 [2024-05-15 00:40:52.436701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f250) with pdu=0x2000190fc998 00:25:26.475 [2024-05-15 00:40:52.437007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:26.475 [2024-05-15 00:40:52.437038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:26.475 00:25:26.475 Latency(us) 00:25:26.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.475 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:26.475 nvme0n1 : 2.01 18958.67 74.06 0.00 0.00 6734.94 6092.42 14951.92 00:25:26.475 =================================================================================================================== 00:25:26.475 Total : 18958.67 74.06 0.00 0.00 6734.94 6092.42 14951.92 00:25:26.475 0 00:25:26.475 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:26.475 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:26.475 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:26.475 | .driver_specific 00:25:26.475 | .nvme_error 00:25:26.475 | .status_code 00:25:26.475 | .command_transient_transport_error' 00:25:26.475 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 986258 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 986258 ']' 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 986258 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 986258 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 986258' 00:25:26.732 killing process with pid 986258 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 986258 00:25:26.732 Received shutdown signal, test time was about 2.000000 seconds 00:25:26.732 00:25:26.732 Latency(us) 00:25:26.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.732 =================================================================================================================== 00:25:26.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.732 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 986258 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=986796 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 986796 /var/tmp/bperf.sock 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 986796 ']' 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:26.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:26.990 00:40:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.990 [2024-05-15 00:40:53.034313] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:26.990 [2024-05-15 00:40:53.034398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986796 ] 00:25:26.990 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:26.990 Zero copy mechanism will not be used. 
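The xtrace above shows how host/digest.sh decides that this data-digest error case passed: it asks the running bdevperf instance for per-bdev I/O statistics over its private RPC socket and extracts the transient-transport-error counter, which the injected CRC errors are expected to have driven above zero (here it read 149). A minimal stand-alone sketch of that check, using only commands visible in this log (the rpc.py path, the /var/tmp/bperf.sock socket and the nvme0n1 bdev name are specific to this run):

    # Query bdevperf's iostat and pull out the NVMe transient transport error count.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                   -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0]
                      | .driver_specific
                      | .nvme_error
                      | .status_code
                      | .command_transient_transport_error')
    # The test asserts the counter is non-zero before tearing bdevperf down.
    (( errcount > 0 ))

Once the assertion holds, the harness kills the bdevperf process (pid 986258 above) and immediately launches a fresh one for the next workload, the randwrite/131072/qd16 pass that follows.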
00:25:26.990 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.990 [2024-05-15 00:40:53.107187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.248 [2024-05-15 00:40:53.227890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.248 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:27.248 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:25:27.248 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.248 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.504 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:27.504 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.504 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.504 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.504 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.504 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.762 nvme0n1 00:25:28.020 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:28.020 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.020 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.020 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:28.020 00:40:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:28.020 Zero copy mechanism will not be used. 00:25:28.020 Running I/O for 2 seconds... 
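Before the 2-second randwrite/128KiB/qd16 pass above starts producing the digest failures that follow, the harness wires everything up through a handful of RPCs, all visible in the xtrace: bdev_nvme is told to keep NVMe error statistics and retry failed I/O indefinitely, the target-side crc32c error injector is first disabled, the controller is attached with data digest (--ddgst) enabled, and only then is crc32c corruption injected (at the interval given by -i 32) before perform_tests kicks off the I/O. A sketch of that sequence, assembled from the commands in this log (rpc_cmd is the harness helper that talks to the nvmf target's own RPC socket; the address, port and NQN are the values used by this particular run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Count NVMe error completions and retry failed I/O forever inside bdev_nvme.
    $rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with crc32c error injection disabled on the target side.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the controller with data digest enabled so every payload is CRC-checked.
    $rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject crc32c corruption (-t corrupt, interval 32), then drive I/O via bdevperf.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $bperf_sock perform_tests

Each corrupted digest surfaces on the host as a data_crc32_calc_done error followed by a COMMAND TRANSIENT TRANSPORT ERROR completion, which is exactly the pattern repeated in the entries below.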
00:25:28.020 [2024-05-15 00:40:54.066305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.020 [2024-05-15 00:40:54.068493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.020 [2024-05-15 00:40:54.068541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.020 [2024-05-15 00:40:54.088314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.020 [2024-05-15 00:40:54.091038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.020 [2024-05-15 00:40:54.091068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.020 [2024-05-15 00:40:54.111560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.020 [2024-05-15 00:40:54.113197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.020 [2024-05-15 00:40:54.113245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.020 [2024-05-15 00:40:54.131904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.020 [2024-05-15 00:40:54.133670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.020 [2024-05-15 00:40:54.134888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.020 [2024-05-15 00:40:54.150541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.020 [2024-05-15 00:40:54.152905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.020 [2024-05-15 00:40:54.152946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.020 [2024-05-15 00:40:54.168711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.020 [2024-05-15 00:40:54.170321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.020 [2024-05-15 00:40:54.170356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.187558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.188925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.188979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.206905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.208783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.208818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.226756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.229281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.229314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.246080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.246976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.247006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.267331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.269194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.269335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.287088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.288831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.288866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.307182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.309031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.309059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.329272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.277 [2024-05-15 00:40:54.331098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.277 [2024-05-15 00:40:54.331131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.277 [2024-05-15 00:40:54.349898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.278 [2024-05-15 00:40:54.352066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.278 [2024-05-15 00:40:54.352096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 00:40:54.372187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.278 [2024-05-15 00:40:54.374336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.278 [2024-05-15 00:40:54.374371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 00:40:54.391868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.278 [2024-05-15 00:40:54.393381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.278 [2024-05-15 00:40:54.393417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 00:40:54.411839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.278 [2024-05-15 00:40:54.413238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.278 [2024-05-15 00:40:54.413297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.278 [2024-05-15 00:40:54.429870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.278 [2024-05-15 00:40:54.432843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.278 [2024-05-15 00:40:54.432877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.449340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.450774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.450809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.468854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.471335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.471368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.487108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.488661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.489792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.506009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.507631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.509037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.524576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.525641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.525672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.545098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.548021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.548055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.564092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.565503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.565538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.584500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.586462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.586497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.603147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.605785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 
00:40:54.605819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.622380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.624874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.624908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.642082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.645429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.645464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.663727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.666362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.666397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.536 [2024-05-15 00:40:54.684486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.536 [2024-05-15 00:40:54.687320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.536 [2024-05-15 00:40:54.687355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.704019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.706662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.706697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.724855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.726632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.726667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.744503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.746056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.746088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.762629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.764676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.764710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.782685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.785572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.785607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.801980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.803882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.803917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.821143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.822791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.822825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.840241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.841996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.842029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.858939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.862381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.862415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.878523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.881863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.881897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.898327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.900491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.794 [2024-05-15 00:40:54.901815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.794 [2024-05-15 00:40:54.918289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.794 [2024-05-15 00:40:54.921582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.795 [2024-05-15 00:40:54.921632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.795 [2024-05-15 00:40:54.939426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:28.795 [2024-05-15 00:40:54.942457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.795 [2024-05-15 00:40:54.942491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.052 [2024-05-15 00:40:54.960188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.052 [2024-05-15 00:40:54.963460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.052 [2024-05-15 00:40:54.963496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.052 [2024-05-15 00:40:54.980112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.052 [2024-05-15 00:40:54.983728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.052 [2024-05-15 00:40:54.983763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.052 [2024-05-15 00:40:54.999429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.052 [2024-05-15 00:40:55.003023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.052 [2024-05-15 00:40:55.003059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.052 [2024-05-15 00:40:55.019922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.052 [2024-05-15 00:40:55.021552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.052 [2024-05-15 00:40:55.021585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.052 [2024-05-15 00:40:55.042267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.045468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.045503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.063030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.066360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.066394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.082092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.085417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.085452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.104442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.106036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.107103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.123916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.124699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.124733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.145693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.148077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.148111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.165817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.167822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.167857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.184703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.186581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.186615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.053 [2024-05-15 00:40:55.204438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.053 [2024-05-15 00:40:55.206429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.053 [2024-05-15 00:40:55.206463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.310 [2024-05-15 00:40:55.225531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.310 [2024-05-15 00:40:55.227421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.310 [2024-05-15 00:40:55.227457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.310 [2024-05-15 00:40:55.246686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.310 [2024-05-15 00:40:55.248626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.310 [2024-05-15 00:40:55.248661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.310 [2024-05-15 00:40:55.268043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.310 [2024-05-15 00:40:55.269470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.310 [2024-05-15 00:40:55.269504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.310 [2024-05-15 00:40:55.286608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.310 [2024-05-15 00:40:55.288506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.288541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.307183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 
00:40:55.310096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.310130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.326008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 00:40:55.327776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.329116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.345790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 00:40:55.348765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.348800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.364290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 00:40:55.367003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.367039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.382719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 00:40:55.384244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.386058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.403653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 00:40:55.406992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.407027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.423605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 00:40:55.425397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.426898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.445146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 
00:25:29.311 [2024-05-15 00:40:55.448701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.448746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.311 [2024-05-15 00:40:55.466923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.311 [2024-05-15 00:40:55.470522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.311 [2024-05-15 00:40:55.470557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.487127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.490453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.490488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.507864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.508766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.508801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.528030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.531305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.531339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.548072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.550025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.551249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.567697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.570425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.571298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.587209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.590751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.590785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.607842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.609013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.609048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.626819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.628347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.628383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.646602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.648153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.649300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.666024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.668355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.668389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.686634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.689825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.568 [2024-05-15 00:40:55.689858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.568 [2024-05-15 00:40:55.708595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.568 [2024-05-15 00:40:55.710111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.569 [2024-05-15 00:40:55.710145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.569 [2024-05-15 00:40:55.731032] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.732524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.733858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.751475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.754649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.754682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.772520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.776226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.776260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.793547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.795236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.796549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.813207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.816088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.816122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.831753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.834718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.834752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.853009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.856281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.856314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.875398] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.879043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.879077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.895785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.825 [2024-05-15 00:40:55.898800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.825 [2024-05-15 00:40:55.898834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.825 [2024-05-15 00:40:55.916368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.826 [2024-05-15 00:40:55.917846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.826 [2024-05-15 00:40:55.917881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:29.826 [2024-05-15 00:40:55.936121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.826 [2024-05-15 00:40:55.939732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.826 [2024-05-15 00:40:55.939765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:29.826 [2024-05-15 00:40:55.957016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.826 [2024-05-15 00:40:55.958790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.826 [2024-05-15 00:40:55.958825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:29.826 [2024-05-15 00:40:55.975977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:29.826 [2024-05-15 00:40:55.978233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.826 [2024-05-15 00:40:55.979393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.082 [2024-05-15 00:40:55.995278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:30.082 [2024-05-15 00:40:55.995655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.082 [2024-05-15 00:40:55.997081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:30.082 [2024-05-15 00:40:56.015224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:30.082 [2024-05-15 00:40:56.017384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.082 [2024-05-15 00:40:56.017418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:30.082 [2024-05-15 00:40:56.035069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:30.082 [2024-05-15 00:40:56.036831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.083 [2024-05-15 00:40:56.036865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:30.083 [2024-05-15 00:40:56.056219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x145f730) with pdu=0x2000190fef90 00:25:30.083 [2024-05-15 00:40:56.058341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.083 [2024-05-15 00:40:56.058375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:30.083 00:25:30.083 Latency(us) 00:25:30.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.083 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:30.083 nvme0n1 : 2.01 1541.50 192.69 0.00 0.00 10260.61 5509.88 25243.50 00:25:30.083 =================================================================================================================== 00:25:30.083 Total : 1541.50 192.69 0.00 0.00 10260.61 5509.88 25243.50 00:25:30.083 0 00:25:30.083 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:30.083 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:30.083 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:30.083 | .driver_specific 00:25:30.083 | .nvme_error 00:25:30.083 | .status_code 00:25:30.083 | .command_transient_transport_error' 00:25:30.083 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 100 > 0 )) 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 986796 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 986796 ']' 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 986796 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 986796 00:25:30.339 00:40:56 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 986796' 00:25:30.339 killing process with pid 986796 00:25:30.339 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 986796 00:25:30.339 Received shutdown signal, test time was about 2.000000 seconds 00:25:30.339 00:25:30.339 Latency(us) 00:25:30.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.340 =================================================================================================================== 00:25:30.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.340 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 986796 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 985033 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 985033 ']' 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 985033 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 985033 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 985033' 00:25:30.596 killing process with pid 985033 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 985033 00:25:30.596 [2024-05-15 00:40:56.706066] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:30.596 00:40:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 985033 00:25:30.853 00:25:30.853 real 0m18.224s 00:25:30.853 user 0m34.882s 00:25:30.853 sys 0m3.943s 00:25:30.853 00:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:30.853 00:40:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.853 ************************************ 00:25:30.853 END TEST nvmf_digest_error 00:25:30.853 ************************************ 00:25:31.110 00:40:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:31.110 00:40:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:31.110 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.110 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:31.110 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.110 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 
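For reference, the pass/fail check traced a few lines above pulls the transient-error counter straight out of bdev_get_iostat over the bperf RPC socket. Condensed into a directly runnable form (same RPC, socket and jq path as in the trace, only the line breaks differ):

    # Query per-bdev I/O statistics from the bdevperf instance and extract the
    # counter that each data digest failure increments on the host side.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In this run the pipeline printed 100, so the (( 100 > 0 )) assertion in digest.sh passed and the bperf process was torn down.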
00:25:31.110 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.111 rmmod nvme_tcp 00:25:31.111 rmmod nvme_fabrics 00:25:31.111 rmmod nvme_keyring 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 985033 ']' 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 985033 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 985033 ']' 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 985033 00:25:31.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (985033) - No such process 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 985033 is not found' 00:25:31.111 Process with pid 985033 is not found 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.111 00:40:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.014 00:40:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.014 00:25:33.014 real 0m41.037s 00:25:33.014 user 1m12.830s 00:25:33.014 sys 0m9.702s 00:25:33.014 00:40:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:33.014 00:40:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:33.014 ************************************ 00:25:33.014 END TEST nvmf_digest 00:25:33.014 ************************************ 00:25:33.014 00:40:59 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:25:33.014 00:40:59 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:25:33.014 00:40:59 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:25:33.014 00:40:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:33.014 00:40:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:33.014 00:40:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:33.014 00:40:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.272 ************************************ 00:25:33.272 START TEST nvmf_bdevperf 00:25:33.272 ************************************ 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:33.272 * Looking for test storage... 
00:25:33.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.272 00:40:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:35.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:35.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:35.798 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:35.798 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:35.799 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:35.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:25:35.799 00:25:35.799 --- 10.0.0.2 ping statistics --- 00:25:35.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.799 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:35.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:25:35.799 00:25:35.799 --- 10.0.0.1 ping statistics --- 00:25:35.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.799 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=989440 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 989440 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 989440 ']' 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:35.799 00:41:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.799 [2024-05-15 00:41:01.855672] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:35.799 [2024-05-15 00:41:01.855761] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.799 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.799 [2024-05-15 00:41:01.933658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:36.057 [2024-05-15 00:41:02.043723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
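The two one-packet pings a little further up are the smoke test for the namespace plumbing that nvmf_tcp_init had just put in place: the target port cvl_0_0 (10.0.0.2) is moved into its own network namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace. Collected from the trace into one plain sequence (root privileges assumed, the initial address flushes omitted, names exactly as they appear in this log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator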
00:25:36.057 [2024-05-15 00:41:02.043799] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.057 [2024-05-15 00:41:02.043814] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.057 [2024-05-15 00:41:02.043824] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.057 [2024-05-15 00:41:02.043848] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.057 [2024-05-15 00:41:02.043974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.057 [2024-05-15 00:41:02.044019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.057 [2024-05-15 00:41:02.044021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.057 [2024-05-15 00:41:02.194707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.057 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.315 Malloc0 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:25:36.315 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:36.315 [2024-05-15 00:41:02.258043] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:36.315 [2024-05-15 00:41:02.258369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:36.316 { 00:25:36.316 "params": { 00:25:36.316 "name": "Nvme$subsystem", 00:25:36.316 "trtype": "$TEST_TRANSPORT", 00:25:36.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.316 "adrfam": "ipv4", 00:25:36.316 "trsvcid": "$NVMF_PORT", 00:25:36.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.316 "hdgst": ${hdgst:-false}, 00:25:36.316 "ddgst": ${ddgst:-false} 00:25:36.316 }, 00:25:36.316 "method": "bdev_nvme_attach_controller" 00:25:36.316 } 00:25:36.316 EOF 00:25:36.316 )") 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:36.316 00:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:36.316 "params": { 00:25:36.316 "name": "Nvme1", 00:25:36.316 "trtype": "tcp", 00:25:36.316 "traddr": "10.0.0.2", 00:25:36.316 "adrfam": "ipv4", 00:25:36.316 "trsvcid": "4420", 00:25:36.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.316 "hdgst": false, 00:25:36.316 "ddgst": false 00:25:36.316 }, 00:25:36.316 "method": "bdev_nvme_attach_controller" 00:25:36.316 }' 00:25:36.316 [2024-05-15 00:41:02.305365] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:36.316 [2024-05-15 00:41:02.305432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989579 ] 00:25:36.316 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.316 [2024-05-15 00:41:02.375811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.574 [2024-05-15 00:41:02.489832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.831 Running I/O for 1 seconds... 
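The target-side bring-up traced just before this run reduces to five RPCs against the freshly started nvmf_tgt. The script issues them through its rpc_cmd wrapper, which drives scripts/rpc.py underneath, so an equivalent by-hand sequence looks roughly like this (default RPC socket assumed, namespace handling omitted):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the listener RPC the target prints the 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice seen above, and bdevperf can attach to nqn.2016-06.io.spdk:cnode1 from the initiator side.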
00:25:37.763 00:25:37.763 Latency(us) 00:25:37.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.763 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:37.763 Verification LBA range: start 0x0 length 0x4000 00:25:37.763 Nvme1n1 : 1.01 8723.48 34.08 0.00 0.00 14604.98 2718.53 17087.91 00:25:37.763 =================================================================================================================== 00:25:37.763 Total : 8723.48 34.08 0.00 0.00 14604.98 2718.53 17087.91 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=989745 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:38.021 { 00:25:38.021 "params": { 00:25:38.021 "name": "Nvme$subsystem", 00:25:38.021 "trtype": "$TEST_TRANSPORT", 00:25:38.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.021 "adrfam": "ipv4", 00:25:38.021 "trsvcid": "$NVMF_PORT", 00:25:38.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.021 "hdgst": ${hdgst:-false}, 00:25:38.021 "ddgst": ${ddgst:-false} 00:25:38.021 }, 00:25:38.021 "method": "bdev_nvme_attach_controller" 00:25:38.021 } 00:25:38.021 EOF 00:25:38.021 )") 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:38.021 00:41:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:38.021 "params": { 00:25:38.021 "name": "Nvme1", 00:25:38.021 "trtype": "tcp", 00:25:38.021 "traddr": "10.0.0.2", 00:25:38.021 "adrfam": "ipv4", 00:25:38.021 "trsvcid": "4420", 00:25:38.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:38.021 "hdgst": false, 00:25:38.021 "ddgst": false 00:25:38.021 }, 00:25:38.021 "method": "bdev_nvme_attach_controller" 00:25:38.021 }' 00:25:38.021 [2024-05-15 00:41:04.163991] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:38.021 [2024-05-15 00:41:04.164074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989745 ] 00:25:38.279 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.279 [2024-05-15 00:41:04.239144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.279 [2024-05-15 00:41:04.350164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.537 Running I/O for 15 seconds... 
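As a quick sanity check on the verification table above: 8723.48 IOPS at the 4096-byte I/O size works out to 8723.48 x 4096 / 2^20, roughly 34.08 MiB/s, matching the MiB/s column, and the earlier digest-error table is consistent in the same way (1541.50 x 128 KiB per I/O gives about 192.69 MiB/s). The 15-second run started here is then disturbed on purpose: a few lines further down the test kills the target process outright, and the long run of ABORTED - SQ DELETION completions that follows is how the host driver reports the writes it had to abort once the queue pair disappeared.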
00:25:41.068 00:41:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 989440 00:25:41.068 00:41:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:41.068 [2024-05-15 00:41:07.130717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.130768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.130800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.130820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.130839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.130856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.130873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.130889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.130907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.130937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.130975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.130992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.131010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.131025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.131041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.131057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.131072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.068 [2024-05-15 00:41:07.131086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.068 [2024-05-15 00:41:07.131102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.069 [2024-05-15 00:41:07.131331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.069 [2024-05-15 00:41:07.131363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131460] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.131964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.131996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 
00:41:07.132146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.069 [2024-05-15 00:41:07.132433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.069 [2024-05-15 00:41:07.132450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.132966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.132999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45648 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.070 [2024-05-15 00:41:07.133269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 
[2024-05-15 00:41:07.133493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.070 [2024-05-15 00:41:07.133706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.070 [2024-05-15 00:41:07.133721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.133753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-05-15 00:41:07.133785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-05-15 00:41:07.133817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-05-15 00:41:07.133849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-05-15 00:41:07.133881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-05-15 00:41:07.133912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.071 [2024-05-15 00:41:07.133952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.133968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.133999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:41.071 [2024-05-15 00:41:07.134824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.134951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.134985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.135000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.135016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.071 [2024-05-15 00:41:07.135030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.071 [2024-05-15 00:41:07.135045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d6770 is same with the state(5) to be set 00:25:41.071 [2024-05-15 00:41:07.135063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.071 [2024-05-15 00:41:07.135075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.072 [2024-05-15 00:41:07.135087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45096 len:8 PRP1 0x0 PRP2 0x0 00:25:41.072 [2024-05-15 00:41:07.135100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.072 [2024-05-15 00:41:07.135161] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16d6770 was disconnected and freed. reset controller. 
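The "(00/08)" printed with every aborted completion above is the NVMe status code type / status code pair: type 0h (generic command status) with code 08h, "Command Aborted due to SQ Deletion" — every I/O still queued on the qpair was failed when the submission queue was torn down for the reset. The sketch below is illustrative only and is not part of the test run; it unpacks that pair from completion-queue-entry Dword 3 using the bit layout given in the NVMe base specification (phase tag at bit 16, status field at bits 31:17).

    /* Illustrative only -- not taken from the log above. Decodes the
     * "(SCT/SC)" pair that the spdk_nvme_print_completion lines show,
     * e.g. "(00/08)": status code type 0h (generic command status),
     * status code 08h ("Command Aborted due to SQ Deletion"). */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_status(uint32_t cqe_dw3)
    {
        uint16_t status = cqe_dw3 >> 16;        /* phase tag + status field */
        unsigned sc  = (status >> 1) & 0xff;    /* Status Code               */
        unsigned sct = (status >> 9) & 0x7;     /* Status Code Type          */
        unsigned dnr = (status >> 15) & 0x1;    /* Do Not Retry              */

        printf("(%02x/%02x) dnr:%u%s\n", sct, sc, dnr,
               (sct == 0 && sc == 0x08) ? "  -> ABORTED - SQ DELETION" : "");
    }

    int main(void)
    {
        /* sct=0, sc=0x08, dnr=0: the status reported for every aborted I/O */
        decode_status(0x08u << (16 + 1));
        return 0;
    }

Run against that value it prints "(00/08) dnr:0  -> ABORTED - SQ DELETION", matching the completions logged above.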
00:25:41.072 [2024-05-15 00:41:07.135256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.072 [2024-05-15 00:41:07.135280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.072 [2024-05-15 00:41:07.135297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.072 [2024-05-15 00:41:07.135427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.072 [2024-05-15 00:41:07.135444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.072 [2024-05-15 00:41:07.135458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.072 [2024-05-15 00:41:07.135471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.072 [2024-05-15 00:41:07.135484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.072 [2024-05-15 00:41:07.135497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.139371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.072 [2024-05-15 00:41:07.139412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.140599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.140871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.140944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.072 [2024-05-15 00:41:07.140966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.141220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.141482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.072 [2024-05-15 00:41:07.141507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.072 [2024-05-15 00:41:07.141527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.072 [2024-05-15 00:41:07.145153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
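Each reconnect attempt in the cycles that follow fails in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: after the kill -9 near the top of this block nothing is accepting connections on 10.0.0.2:4420, so every "resetting controller" pass ends with "Resetting controller failed." until the target side comes back. The following stand-alone reproduction of that errno is illustrative only (127.0.0.1 and the NVMe/TCP default port 4420 are stand-ins for any address with no listener):

    /* Illustrative only: shows the errno the log keeps reporting.
     * Connecting to a TCP port with no listener fails with
     * ECONNREFUSED, which is errno 111 on Linux. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP default port  */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Built and run on a host with nothing bound to that port, it prints "connect() failed, errno = 111 (Connection refused)" — the same condition each retry cycle below keeps hitting.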
00:25:41.072 [2024-05-15 00:41:07.153680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.072 [2024-05-15 00:41:07.154202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.154409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.154435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.072 [2024-05-15 00:41:07.154451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.154720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.154997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.072 [2024-05-15 00:41:07.155020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.072 [2024-05-15 00:41:07.155034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.072 [2024-05-15 00:41:07.158615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.072 [2024-05-15 00:41:07.167597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.072 [2024-05-15 00:41:07.168048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.168300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.168326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.072 [2024-05-15 00:41:07.168342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.168611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.168857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.072 [2024-05-15 00:41:07.168881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.072 [2024-05-15 00:41:07.168896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.072 [2024-05-15 00:41:07.172535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.072 [2024-05-15 00:41:07.181518] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.072 [2024-05-15 00:41:07.181991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.182229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.182258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.072 [2024-05-15 00:41:07.182275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.182522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.182769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.072 [2024-05-15 00:41:07.182792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.072 [2024-05-15 00:41:07.182808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.072 [2024-05-15 00:41:07.186440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.072 [2024-05-15 00:41:07.195416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.072 [2024-05-15 00:41:07.195860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.196096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.196126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.072 [2024-05-15 00:41:07.196144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.196385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.196630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.072 [2024-05-15 00:41:07.196654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.072 [2024-05-15 00:41:07.196669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.072 [2024-05-15 00:41:07.200299] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.072 [2024-05-15 00:41:07.209496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.072 [2024-05-15 00:41:07.209969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.210153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.210182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.072 [2024-05-15 00:41:07.210199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.210440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.210685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.072 [2024-05-15 00:41:07.210709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.072 [2024-05-15 00:41:07.210724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.072 [2024-05-15 00:41:07.214361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.072 [2024-05-15 00:41:07.223569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.072 [2024-05-15 00:41:07.224046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.224254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-05-15 00:41:07.224283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.072 [2024-05-15 00:41:07.224300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.072 [2024-05-15 00:41:07.224542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.072 [2024-05-15 00:41:07.224796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.072 [2024-05-15 00:41:07.224820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.072 [2024-05-15 00:41:07.224836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.072 [2024-05-15 00:41:07.228509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.332 [2024-05-15 00:41:07.237520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.237983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.238275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.238326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.238343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.238592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.238838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.238862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.238877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.242505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.332 [2024-05-15 00:41:07.251485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.251973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.252364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.252425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.252442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.252683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.252928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.252966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.252982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.256598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.332 [2024-05-15 00:41:07.265572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.266028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.266196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.266221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.266237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.266487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.266734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.266757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.266779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.270407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.332 [2024-05-15 00:41:07.279597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.280046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.280228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.280257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.280274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.280516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.280761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.280784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.280800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.284437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.332 [2024-05-15 00:41:07.293633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.294117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.294418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.294447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.294464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.294704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.294962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.294987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.295002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.298622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.332 [2024-05-15 00:41:07.307619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.308088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.308429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.308476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.308494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.308736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.308995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.309020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.309041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.312661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.332 [2024-05-15 00:41:07.321643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.322148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.322572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.322623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.322641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.322882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.323137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.323162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.323177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.326802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.332 [2024-05-15 00:41:07.335571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.336045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.336282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.336311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.332 [2024-05-15 00:41:07.336329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.332 [2024-05-15 00:41:07.336570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.332 [2024-05-15 00:41:07.336815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.332 [2024-05-15 00:41:07.336839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.332 [2024-05-15 00:41:07.336854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.332 [2024-05-15 00:41:07.340516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.332 [2024-05-15 00:41:07.349494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.332 [2024-05-15 00:41:07.349965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-05-15 00:41:07.350178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.350206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.350224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.350465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.350710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.350734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.350749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.354381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.333 [2024-05-15 00:41:07.363578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.364056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.364397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.364450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.364468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.364710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.364971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.364996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.365012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.368632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.333 [2024-05-15 00:41:07.377618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.378069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.378362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.378391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.378408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.378650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.378896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.378920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.378946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.382575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.333 [2024-05-15 00:41:07.391750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.392221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.392426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.392456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.392479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.392729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.392988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.393012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.393028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.396732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.333 [2024-05-15 00:41:07.405787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.406232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.406527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.406591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.406609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.406850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.407105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.407129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.407145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.410768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.333 [2024-05-15 00:41:07.419748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.420238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.420445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.420473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.420491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.420732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.420992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.421017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.421032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.424654] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.333 [2024-05-15 00:41:07.433847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.434313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.434672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.434738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.434755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.435009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.435256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.435279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.435294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.438917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.333 [2024-05-15 00:41:07.447900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.448382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.448655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.448685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.448703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.448957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.449204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.449227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.449243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.452866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.333 [2024-05-15 00:41:07.461846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.462328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.462611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.462640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.462658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.462900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.463154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.463178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.463194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.333 [2024-05-15 00:41:07.466818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.333 [2024-05-15 00:41:07.475829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.333 [2024-05-15 00:41:07.476289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.476555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-05-15 00:41:07.476584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.333 [2024-05-15 00:41:07.476601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.333 [2024-05-15 00:41:07.476842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.333 [2024-05-15 00:41:07.477104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.333 [2024-05-15 00:41:07.477129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.333 [2024-05-15 00:41:07.477144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.334 [2024-05-15 00:41:07.480771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.334 [2024-05-15 00:41:07.489774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.334 [2024-05-15 00:41:07.490267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.334 [2024-05-15 00:41:07.490484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.334 [2024-05-15 00:41:07.490513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.334 [2024-05-15 00:41:07.490537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.334 [2024-05-15 00:41:07.490779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.334 [2024-05-15 00:41:07.491038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.334 [2024-05-15 00:41:07.491063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.334 [2024-05-15 00:41:07.491078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.494761] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.594 [2024-05-15 00:41:07.503802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.504289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.504655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.504704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.504722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.504976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.505223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.505246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.505262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.508886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.594 [2024-05-15 00:41:07.517907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.518383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.518649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.518679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.518696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.518949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.519195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.519219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.519234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.522861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.594 [2024-05-15 00:41:07.531871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.532551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.532997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.533028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.533052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.533294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.533550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.533573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.533589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.537230] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.594 [2024-05-15 00:41:07.545801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.546266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.546478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.546507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.546525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.546765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.547024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.547048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.547063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.550693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.594 [2024-05-15 00:41:07.559702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.560162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.560439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.560491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.560508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.560750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.561012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.561036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.561052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.564677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.594 [2024-05-15 00:41:07.573680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.574180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.574465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.574494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.574511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.574758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.575016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.575041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.575056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.578684] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.594 [2024-05-15 00:41:07.587685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.588144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.588344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.588373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.588390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.588631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.588876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.588900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.588915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.592552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.594 [2024-05-15 00:41:07.601765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.602222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.602434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.602463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.602480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.602722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.602983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.603008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.603023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.606647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.594 [2024-05-15 00:41:07.615849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.594 [2024-05-15 00:41:07.616339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.616561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.594 [2024-05-15 00:41:07.616609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.594 [2024-05-15 00:41:07.616626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.594 [2024-05-15 00:41:07.616867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.594 [2024-05-15 00:41:07.617129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.594 [2024-05-15 00:41:07.617153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.594 [2024-05-15 00:41:07.617168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.594 [2024-05-15 00:41:07.620787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.595 [2024-05-15 00:41:07.629765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.630255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.630460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.630488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.630506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.630747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.631003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.631027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.631043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.634664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.595 [2024-05-15 00:41:07.643765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.644226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.644468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.644497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.644515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.644756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.645012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.645036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.645051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.648678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.595 [2024-05-15 00:41:07.657692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.658171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.658533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.658594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.658612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.658853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.659164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.659194] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.659210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.662831] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.595 [2024-05-15 00:41:07.671617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.672110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.672355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.672402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.672420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.672661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.672907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.672943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.672961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.676590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.595 [2024-05-15 00:41:07.685587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.686032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.686309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.686365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.686383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.686624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.686869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.686892] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.686908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.690541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.595 [2024-05-15 00:41:07.699535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.700005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.700214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.700243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.700261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.700502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.700747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.700770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.700792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.704431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.595 [2024-05-15 00:41:07.713637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.714123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.714498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.714557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.714574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.714815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.715076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.715100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.715115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.718739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.595 [2024-05-15 00:41:07.727722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.728204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.728412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.728441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.728458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.728699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.728956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.728980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.728995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.732616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.595 [2024-05-15 00:41:07.741632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.595 [2024-05-15 00:41:07.742113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.742425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.595 [2024-05-15 00:41:07.742453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.595 [2024-05-15 00:41:07.742471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.595 [2024-05-15 00:41:07.742712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.595 [2024-05-15 00:41:07.742970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.595 [2024-05-15 00:41:07.742994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.595 [2024-05-15 00:41:07.743010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.595 [2024-05-15 00:41:07.746631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.855 [2024-05-15 00:41:07.755656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.855 [2024-05-15 00:41:07.756117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.855 [2024-05-15 00:41:07.756326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.855 [2024-05-15 00:41:07.756355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.855 [2024-05-15 00:41:07.756372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.856 [2024-05-15 00:41:07.756614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.856 [2024-05-15 00:41:07.756859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.856 [2024-05-15 00:41:07.756883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.856 [2024-05-15 00:41:07.756898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.856 [2024-05-15 00:41:07.760534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.856 [2024-05-15 00:41:07.769558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.856 [2024-05-15 00:41:07.770015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.770253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.770282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.856 [2024-05-15 00:41:07.770300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.856 [2024-05-15 00:41:07.770541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.856 [2024-05-15 00:41:07.770787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.856 [2024-05-15 00:41:07.770810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.856 [2024-05-15 00:41:07.770825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.856 [2024-05-15 00:41:07.774461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.856 [2024-05-15 00:41:07.783656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.856 [2024-05-15 00:41:07.784110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.784515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.784571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.856 [2024-05-15 00:41:07.784588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.856 [2024-05-15 00:41:07.784830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.856 [2024-05-15 00:41:07.785088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.856 [2024-05-15 00:41:07.785113] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.856 [2024-05-15 00:41:07.785128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.856 [2024-05-15 00:41:07.788749] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.856 [2024-05-15 00:41:07.797728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.856 [2024-05-15 00:41:07.798188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.798482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.798511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.856 [2024-05-15 00:41:07.798528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.856 [2024-05-15 00:41:07.798770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.856 [2024-05-15 00:41:07.799028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.856 [2024-05-15 00:41:07.799052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.856 [2024-05-15 00:41:07.799067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.856 [2024-05-15 00:41:07.802688] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.856 [2024-05-15 00:41:07.811684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.856 [2024-05-15 00:41:07.812185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.812499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.812528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.856 [2024-05-15 00:41:07.812545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.856 [2024-05-15 00:41:07.812787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.856 [2024-05-15 00:41:07.813046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.856 [2024-05-15 00:41:07.813071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.856 [2024-05-15 00:41:07.813087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.856 [2024-05-15 00:41:07.816713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.856 [2024-05-15 00:41:07.825729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.856 [2024-05-15 00:41:07.826225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.826438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.826467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.856 [2024-05-15 00:41:07.826484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.856 [2024-05-15 00:41:07.826725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.856 [2024-05-15 00:41:07.826979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.856 [2024-05-15 00:41:07.827003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.856 [2024-05-15 00:41:07.827018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.856 [2024-05-15 00:41:07.830648] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.856 [2024-05-15 00:41:07.839645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.856 [2024-05-15 00:41:07.840156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.840383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.840412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.856 [2024-05-15 00:41:07.840430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.856 [2024-05-15 00:41:07.840671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.856 [2024-05-15 00:41:07.840916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.856 [2024-05-15 00:41:07.840953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.856 [2024-05-15 00:41:07.840969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.856 [2024-05-15 00:41:07.844595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.856 [2024-05-15 00:41:07.853600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.856 [2024-05-15 00:41:07.854069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.854308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.856 [2024-05-15 00:41:07.854337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.854354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.854596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.854842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.854865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.854880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.858519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.857 [2024-05-15 00:41:07.867506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.868068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.868398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.868449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.868466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.868708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.868964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.868988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.869004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.872629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.857 [2024-05-15 00:41:07.881410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.881856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.882074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.882111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.882129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.882371] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.882618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.882641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.882656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.886291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.857 [2024-05-15 00:41:07.895411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.895895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.896116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.896147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.896166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.896407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.896653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.896676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.896691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.900323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.857 [2024-05-15 00:41:07.909321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.909794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.910139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.910209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.910227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.910468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.910714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.910737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.910752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.914389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.857 [2024-05-15 00:41:07.923376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.923847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.924062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.924091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.924114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.924357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.924602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.924625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.924641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.928274] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.857 [2024-05-15 00:41:07.937473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.937919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.938168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.938197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.938214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.938455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.938701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.938724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.938739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.942376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.857 [2024-05-15 00:41:07.951377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.951860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.952046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.952077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.952095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.952337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.952582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.952605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.952620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.956254] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.857 [2024-05-15 00:41:07.965451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.965909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.966156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.966186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.857 [2024-05-15 00:41:07.966203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.857 [2024-05-15 00:41:07.966450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.857 [2024-05-15 00:41:07.966696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.857 [2024-05-15 00:41:07.966719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.857 [2024-05-15 00:41:07.966734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.857 [2024-05-15 00:41:07.970370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.857 [2024-05-15 00:41:07.979362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.857 [2024-05-15 00:41:07.979830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.980034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.857 [2024-05-15 00:41:07.980064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.858 [2024-05-15 00:41:07.980081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.858 [2024-05-15 00:41:07.980323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.858 [2024-05-15 00:41:07.980569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.858 [2024-05-15 00:41:07.980592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.858 [2024-05-15 00:41:07.980607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.858 [2024-05-15 00:41:07.984240] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.858 [2024-05-15 00:41:07.993439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.858 [2024-05-15 00:41:07.993922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.858 [2024-05-15 00:41:07.994168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.858 [2024-05-15 00:41:07.994197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.858 [2024-05-15 00:41:07.994214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.858 [2024-05-15 00:41:07.994455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.858 [2024-05-15 00:41:07.994701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.858 [2024-05-15 00:41:07.994724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.858 [2024-05-15 00:41:07.994740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.858 [2024-05-15 00:41:07.998374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.858 [2024-05-15 00:41:08.007360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.858 [2024-05-15 00:41:08.007830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.858 [2024-05-15 00:41:08.008045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.858 [2024-05-15 00:41:08.008076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:41.858 [2024-05-15 00:41:08.008094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:41.858 [2024-05-15 00:41:08.008335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:41.858 [2024-05-15 00:41:08.008587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.858 [2024-05-15 00:41:08.008610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.858 [2024-05-15 00:41:08.008625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.858 [2024-05-15 00:41:08.012258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.118 [2024-05-15 00:41:08.021323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.118 [2024-05-15 00:41:08.021811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.021985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.022015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.118 [2024-05-15 00:41:08.022032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.118 [2024-05-15 00:41:08.022274] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.118 [2024-05-15 00:41:08.022525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.118 [2024-05-15 00:41:08.022549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.118 [2024-05-15 00:41:08.022564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.118 [2024-05-15 00:41:08.026216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.118 [2024-05-15 00:41:08.035418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.118 [2024-05-15 00:41:08.035944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.036154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.036183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.118 [2024-05-15 00:41:08.036200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.118 [2024-05-15 00:41:08.036442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.118 [2024-05-15 00:41:08.036687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.118 [2024-05-15 00:41:08.036710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.118 [2024-05-15 00:41:08.036725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.118 [2024-05-15 00:41:08.040361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.118 [2024-05-15 00:41:08.049343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.118 [2024-05-15 00:41:08.049834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.050042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.050072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.118 [2024-05-15 00:41:08.050090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.118 [2024-05-15 00:41:08.050331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.118 [2024-05-15 00:41:08.050576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.118 [2024-05-15 00:41:08.050606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.118 [2024-05-15 00:41:08.050622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.118 [2024-05-15 00:41:08.054256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.118 [2024-05-15 00:41:08.063248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.118 [2024-05-15 00:41:08.063702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.063881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.063909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.118 [2024-05-15 00:41:08.063927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.118 [2024-05-15 00:41:08.064183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.118 [2024-05-15 00:41:08.064428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.118 [2024-05-15 00:41:08.064451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.118 [2024-05-15 00:41:08.064466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.118 [2024-05-15 00:41:08.068098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.118 [2024-05-15 00:41:08.077302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.118 [2024-05-15 00:41:08.077757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.077982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.078013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.118 [2024-05-15 00:41:08.078031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.118 [2024-05-15 00:41:08.078273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.118 [2024-05-15 00:41:08.078519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.118 [2024-05-15 00:41:08.078542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.118 [2024-05-15 00:41:08.078558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.118 [2024-05-15 00:41:08.082190] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.118 [2024-05-15 00:41:08.091395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.118 [2024-05-15 00:41:08.091872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.092088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.092118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.118 [2024-05-15 00:41:08.092136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.118 [2024-05-15 00:41:08.092378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.118 [2024-05-15 00:41:08.092623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.118 [2024-05-15 00:41:08.092647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.118 [2024-05-15 00:41:08.092668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.118 [2024-05-15 00:41:08.096303] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.118 [2024-05-15 00:41:08.105296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.118 [2024-05-15 00:41:08.105780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.105995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.118 [2024-05-15 00:41:08.106024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.118 [2024-05-15 00:41:08.106042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.118 [2024-05-15 00:41:08.106283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.118 [2024-05-15 00:41:08.106529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.118 [2024-05-15 00:41:08.106552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.118 [2024-05-15 00:41:08.106567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.118 [2024-05-15 00:41:08.110200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.119 [2024-05-15 00:41:08.119193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.119660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.119829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.119857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.119874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.120128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.120374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.120397] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.120413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.124043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.119 [2024-05-15 00:41:08.133241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.133716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.133937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.133967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.133985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.134226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.134471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.134494] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.134509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.138144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.119 [2024-05-15 00:41:08.147262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.147746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.147969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.148001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.148019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.148260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.148505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.148528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.148543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.152174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.119 [2024-05-15 00:41:08.161367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.161848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.162099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.162129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.162146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.162388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.162634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.162657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.162672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.166304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.119 [2024-05-15 00:41:08.175325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.175804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.176010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.176040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.176058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.176299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.176544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.176567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.176582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.180288] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.119 [2024-05-15 00:41:08.189273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.189704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.189908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.189946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.189966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.190208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.190453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.190476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.190492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.194118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.119 [2024-05-15 00:41:08.203315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.203759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.203974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.204004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.204022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.204263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.204509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.204532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.204547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.208181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.119 [2024-05-15 00:41:08.217378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.217850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.218072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.218103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.218120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.218362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.218607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.218630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.218646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.222279] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.119 [2024-05-15 00:41:08.231265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.231714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.231925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.231964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.231982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.232223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.119 [2024-05-15 00:41:08.232468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.119 [2024-05-15 00:41:08.232492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.119 [2024-05-15 00:41:08.232507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.119 [2024-05-15 00:41:08.236137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.119 [2024-05-15 00:41:08.245332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.119 [2024-05-15 00:41:08.245810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.246000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.119 [2024-05-15 00:41:08.246031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.119 [2024-05-15 00:41:08.246049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.119 [2024-05-15 00:41:08.246290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.120 [2024-05-15 00:41:08.246536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.120 [2024-05-15 00:41:08.246559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.120 [2024-05-15 00:41:08.246574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.120 [2024-05-15 00:41:08.250207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.120 [2024-05-15 00:41:08.259402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.120 [2024-05-15 00:41:08.260012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.120 [2024-05-15 00:41:08.260255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.120 [2024-05-15 00:41:08.260284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.120 [2024-05-15 00:41:08.260302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.120 [2024-05-15 00:41:08.260544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.120 [2024-05-15 00:41:08.260789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.120 [2024-05-15 00:41:08.260811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.120 [2024-05-15 00:41:08.260827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.120 [2024-05-15 00:41:08.264461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.120 [2024-05-15 00:41:08.273446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.120 [2024-05-15 00:41:08.273868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.120 [2024-05-15 00:41:08.274107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.120 [2024-05-15 00:41:08.274142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.120 [2024-05-15 00:41:08.274160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.120 [2024-05-15 00:41:08.274402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.120 [2024-05-15 00:41:08.274647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.120 [2024-05-15 00:41:08.274670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.120 [2024-05-15 00:41:08.274686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.120 [2024-05-15 00:41:08.278359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.414 [2024-05-15 00:41:08.287729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.414 [2024-05-15 00:41:08.288213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.288430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.288465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.414 [2024-05-15 00:41:08.288486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.414 [2024-05-15 00:41:08.288743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.414 [2024-05-15 00:41:08.289018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.414 [2024-05-15 00:41:08.289048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.414 [2024-05-15 00:41:08.289066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.414 [2024-05-15 00:41:08.292836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.414 [2024-05-15 00:41:08.301829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.414 [2024-05-15 00:41:08.302314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.302504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.302533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.414 [2024-05-15 00:41:08.302551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.414 [2024-05-15 00:41:08.302792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.414 [2024-05-15 00:41:08.303048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.414 [2024-05-15 00:41:08.303072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.414 [2024-05-15 00:41:08.303088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.414 [2024-05-15 00:41:08.306707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.414 [2024-05-15 00:41:08.315722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.414 [2024-05-15 00:41:08.316211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.316420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.316449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.414 [2024-05-15 00:41:08.316474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.414 [2024-05-15 00:41:08.316717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.414 [2024-05-15 00:41:08.316972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.414 [2024-05-15 00:41:08.316996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.414 [2024-05-15 00:41:08.317012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.414 [2024-05-15 00:41:08.320633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.414 [2024-05-15 00:41:08.329830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.414 [2024-05-15 00:41:08.330288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.330497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.330526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.414 [2024-05-15 00:41:08.330544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.414 [2024-05-15 00:41:08.330785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.414 [2024-05-15 00:41:08.331043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.414 [2024-05-15 00:41:08.331067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.414 [2024-05-15 00:41:08.331083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.414 [2024-05-15 00:41:08.334705] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.414 [2024-05-15 00:41:08.343890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.414 [2024-05-15 00:41:08.344376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.344621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.414 [2024-05-15 00:41:08.344667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.414 [2024-05-15 00:41:08.344685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.414 [2024-05-15 00:41:08.344926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.414 [2024-05-15 00:41:08.345183] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.414 [2024-05-15 00:41:08.345206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.414 [2024-05-15 00:41:08.345222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.414 [2024-05-15 00:41:08.348839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.414 [2024-05-15 00:41:08.357808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.414 [2024-05-15 00:41:08.358273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.358454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.358484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.358502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.358750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.359008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.359032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.359048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.362673] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.415 [2024-05-15 00:41:08.371734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.372199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.372413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.372441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.372459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.372700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.372962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.372986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.373002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.376624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.415 [2024-05-15 00:41:08.385809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.386245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.386452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.386481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.386498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.386739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.386996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.387020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.387035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.390655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.415 [2024-05-15 00:41:08.399750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.400218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.400403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.400431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.400449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.400689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.400949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.400974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.400989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.404608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.415 [2024-05-15 00:41:08.413795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.414287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.414536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.414565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.414582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.414823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.415080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.415104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.415119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.418739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.415 [2024-05-15 00:41:08.427715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.428173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.428382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.428410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.428428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.428669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.428915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.428949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.428966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.432586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.415 [2024-05-15 00:41:08.441769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.442253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.442495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.442524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.442542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.442783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.443041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.443071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.443087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.446705] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.415 [2024-05-15 00:41:08.455675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.456133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.456393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.415 [2024-05-15 00:41:08.456422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.415 [2024-05-15 00:41:08.456440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.415 [2024-05-15 00:41:08.456682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.415 [2024-05-15 00:41:08.456927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.415 [2024-05-15 00:41:08.456961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.415 [2024-05-15 00:41:08.456977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.415 [2024-05-15 00:41:08.460600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.415 [2024-05-15 00:41:08.469591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.415 [2024-05-15 00:41:08.470040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.470226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.470255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.416 [2024-05-15 00:41:08.470272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.416 [2024-05-15 00:41:08.470513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.416 [2024-05-15 00:41:08.470758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.416 [2024-05-15 00:41:08.470782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.416 [2024-05-15 00:41:08.470797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.416 [2024-05-15 00:41:08.474433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.416 [2024-05-15 00:41:08.483622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.416 [2024-05-15 00:41:08.484077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.484256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.484286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.416 [2024-05-15 00:41:08.484304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.416 [2024-05-15 00:41:08.484546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.416 [2024-05-15 00:41:08.484792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.416 [2024-05-15 00:41:08.484815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.416 [2024-05-15 00:41:08.484836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.416 [2024-05-15 00:41:08.488468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.416 [2024-05-15 00:41:08.497660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.416 [2024-05-15 00:41:08.498145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.498394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.498423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.416 [2024-05-15 00:41:08.498440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.416 [2024-05-15 00:41:08.498681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.416 [2024-05-15 00:41:08.498927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.416 [2024-05-15 00:41:08.498960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.416 [2024-05-15 00:41:08.498976] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.416 [2024-05-15 00:41:08.502598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.416 [2024-05-15 00:41:08.511574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.416 [2024-05-15 00:41:08.512049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.512256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.512285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.416 [2024-05-15 00:41:08.512302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.416 [2024-05-15 00:41:08.512544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.416 [2024-05-15 00:41:08.512790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.416 [2024-05-15 00:41:08.512812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.416 [2024-05-15 00:41:08.512828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.416 [2024-05-15 00:41:08.516459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.416 [2024-05-15 00:41:08.525648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.416 [2024-05-15 00:41:08.526097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.526345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.526373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.416 [2024-05-15 00:41:08.526390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.416 [2024-05-15 00:41:08.526631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.416 [2024-05-15 00:41:08.526877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.416 [2024-05-15 00:41:08.526900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.416 [2024-05-15 00:41:08.526915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.416 [2024-05-15 00:41:08.530557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.416 [2024-05-15 00:41:08.539532] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.416 [2024-05-15 00:41:08.539982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.540228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.540256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.416 [2024-05-15 00:41:08.540274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.416 [2024-05-15 00:41:08.540515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.416 [2024-05-15 00:41:08.540761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.416 [2024-05-15 00:41:08.540784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.416 [2024-05-15 00:41:08.540799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.416 [2024-05-15 00:41:08.544431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.416 [2024-05-15 00:41:08.553645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.416 [2024-05-15 00:41:08.554112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.554317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.416 [2024-05-15 00:41:08.554349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.416 [2024-05-15 00:41:08.554372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.416 [2024-05-15 00:41:08.554625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.416 [2024-05-15 00:41:08.554877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.416 [2024-05-15 00:41:08.554903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.416 [2024-05-15 00:41:08.554926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.558767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.676 [2024-05-15 00:41:08.567545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.568056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.568301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.568330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.568347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.568589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.568834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.568857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.568873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.572501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.676 [2024-05-15 00:41:08.581504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.581988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.582176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.582205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.582223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.582464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.582710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.582733] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.582749] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.586378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.676 [2024-05-15 00:41:08.595572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.596018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.596233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.596261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.596278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.596519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.596765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.596789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.596804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.600435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.676 [2024-05-15 00:41:08.609624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.610048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.610294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.610323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.610340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.610582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.610828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.610852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.610867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.614501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.676 [2024-05-15 00:41:08.623694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.624154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.624379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.624408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.624425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.624666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.624912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.624945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.624963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.628589] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.676 [2024-05-15 00:41:08.637796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.638265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.638510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.638539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.638556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.638797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.639054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.639078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.639094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.642715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.676 [2024-05-15 00:41:08.651839] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.652322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.652546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.652575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.652593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.652834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.653092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.653116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.653132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.656763] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.676 [2024-05-15 00:41:08.665771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.676 [2024-05-15 00:41:08.666208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.666448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.676 [2024-05-15 00:41:08.666483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.676 [2024-05-15 00:41:08.666501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.676 [2024-05-15 00:41:08.666742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.676 [2024-05-15 00:41:08.666998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.676 [2024-05-15 00:41:08.667033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.676 [2024-05-15 00:41:08.667049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.676 [2024-05-15 00:41:08.670672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.677 [2024-05-15 00:41:08.679668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.680136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.680347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.680376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.680393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.680634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.680880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.680903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.680919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.684560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.677 [2024-05-15 00:41:08.693755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.694187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.694391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.694420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.694437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.694678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.694924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.694958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.694974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.698595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.677 [2024-05-15 00:41:08.707790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.708257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.708446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.708475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.708497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.708739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.708997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.709022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.709038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.712659] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.677 [2024-05-15 00:41:08.721857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.722349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.722547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.722577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.722595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.722836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.723092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.723116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.723132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.726753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.677 [2024-05-15 00:41:08.735961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.736434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.736621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.736648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.736665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.736907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.737160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.737185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.737200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.740823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.677 [2024-05-15 00:41:08.750066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.750491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.750695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.750724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.750741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.750998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.751245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.751269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.751284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.754908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.677 [2024-05-15 00:41:08.764111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.764563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.764770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.764799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.764816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.765067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.765313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.765337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.765352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.768985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.677 [2024-05-15 00:41:08.778180] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.778650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.778867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.778896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.778913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.779164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.779410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.779433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.779449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.783148] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.677 [2024-05-15 00:41:08.792132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.792578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.792768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.792797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.677 [2024-05-15 00:41:08.792814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.677 [2024-05-15 00:41:08.793068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.677 [2024-05-15 00:41:08.793320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.677 [2024-05-15 00:41:08.793345] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.677 [2024-05-15 00:41:08.793360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.677 [2024-05-15 00:41:08.796988] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.677 [2024-05-15 00:41:08.806187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.677 [2024-05-15 00:41:08.806674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.806850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.677 [2024-05-15 00:41:08.806877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.678 [2024-05-15 00:41:08.806893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.678 [2024-05-15 00:41:08.807146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.678 [2024-05-15 00:41:08.807393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.678 [2024-05-15 00:41:08.807417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.678 [2024-05-15 00:41:08.807432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.678 [2024-05-15 00:41:08.811078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.678 [2024-05-15 00:41:08.820101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.678 [2024-05-15 00:41:08.820550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.678 [2024-05-15 00:41:08.820793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.678 [2024-05-15 00:41:08.820821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.678 [2024-05-15 00:41:08.820838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.678 [2024-05-15 00:41:08.821089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.678 [2024-05-15 00:41:08.821336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.678 [2024-05-15 00:41:08.821359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.678 [2024-05-15 00:41:08.821375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.678 [2024-05-15 00:41:08.825002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.678 [2024-05-15 00:41:08.834214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.678 [2024-05-15 00:41:08.834704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.678 [2024-05-15 00:41:08.834909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.678 [2024-05-15 00:41:08.834945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.678 [2024-05-15 00:41:08.834965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.678 [2024-05-15 00:41:08.835206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.678 [2024-05-15 00:41:08.835452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.678 [2024-05-15 00:41:08.835481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.678 [2024-05-15 00:41:08.835497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.937 [2024-05-15 00:41:08.839186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.937 [2024-05-15 00:41:08.848198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.937 [2024-05-15 00:41:08.848666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.937 [2024-05-15 00:41:08.848908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.937 [2024-05-15 00:41:08.848945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.937 [2024-05-15 00:41:08.848965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.937 [2024-05-15 00:41:08.849208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.937 [2024-05-15 00:41:08.849454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.937 [2024-05-15 00:41:08.849478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.937 [2024-05-15 00:41:08.849494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.937 [2024-05-15 00:41:08.853122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.938 [2024-05-15 00:41:08.862106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.862579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.862804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.862832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.862849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.863099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.863345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.863368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.863384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.867016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.938 [2024-05-15 00:41:08.876019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.876517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.876718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.876747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.876764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.877018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.877265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.877288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.877310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.880937] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.938 [2024-05-15 00:41:08.890133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.890618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.890858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.890887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.890904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.891156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.891402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.891426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.891441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.895111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.938 [2024-05-15 00:41:08.904182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.904630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.904840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.904869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.904887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.905139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.905385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.905410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.905425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.909051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.938 [2024-05-15 00:41:08.918244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.918712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.918914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.918951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.918981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.919222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.919467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.919490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.919505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.923137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.938 [2024-05-15 00:41:08.932333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.932816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.933035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.933065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.933083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.933324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.933570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.933593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.933608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.937235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.938 [2024-05-15 00:41:08.946424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.946896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.947093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.947123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.947140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.947382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.947627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.947650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.947666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.951293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.938 [2024-05-15 00:41:08.960479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.960907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.961126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.961156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.961173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.961414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.961660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.961684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.961699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.965328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.938 [2024-05-15 00:41:08.974520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.974977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.975168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.938 [2024-05-15 00:41:08.975197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.938 [2024-05-15 00:41:08.975215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.938 [2024-05-15 00:41:08.975456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.938 [2024-05-15 00:41:08.975702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.938 [2024-05-15 00:41:08.975725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.938 [2024-05-15 00:41:08.975741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.938 [2024-05-15 00:41:08.979374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.938 [2024-05-15 00:41:08.988583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.938 [2024-05-15 00:41:08.989074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:08.989291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:08.989320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:08.989338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:08.989579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:08.989825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:08.989849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:08.989864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:08.993497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.939 [2024-05-15 00:41:09.002475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.939 [2024-05-15 00:41:09.002927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.003142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.003172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:09.003189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:09.003431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:09.003676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:09.003699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:09.003715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:09.007346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.939 [2024-05-15 00:41:09.016538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.939 [2024-05-15 00:41:09.017014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.017230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.017261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:09.017279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:09.017521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:09.017767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:09.017790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:09.017806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:09.021435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.939 [2024-05-15 00:41:09.030625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.939 [2024-05-15 00:41:09.031098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.031308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.031337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:09.031354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:09.031595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:09.031842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:09.031865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:09.031880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:09.035509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.939 [2024-05-15 00:41:09.044701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.939 [2024-05-15 00:41:09.045164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.045371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.045399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:09.045417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:09.045658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:09.045903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:09.045927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:09.045955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:09.049579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.939 [2024-05-15 00:41:09.058772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.939 [2024-05-15 00:41:09.059256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.059430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.059464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:09.059482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:09.059724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:09.059981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:09.060005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:09.060021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:09.063642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.939 [2024-05-15 00:41:09.072835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.939 [2024-05-15 00:41:09.073334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.073511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.073540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:09.073558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:09.073799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:09.074056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:09.074080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:09.074096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:09.077716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.939 [2024-05-15 00:41:09.086905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.939 [2024-05-15 00:41:09.087365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.087607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.939 [2024-05-15 00:41:09.087636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:42.939 [2024-05-15 00:41:09.087653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:42.939 [2024-05-15 00:41:09.087894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:42.939 [2024-05-15 00:41:09.088150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.939 [2024-05-15 00:41:09.088174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.939 [2024-05-15 00:41:09.088189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.939 [2024-05-15 00:41:09.091810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.197 [2024-05-15 00:41:09.100845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.197 [2024-05-15 00:41:09.101278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.101466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.101493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.197 [2024-05-15 00:41:09.101518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.197 [2024-05-15 00:41:09.101760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.197 [2024-05-15 00:41:09.102017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.197 [2024-05-15 00:41:09.102042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.197 [2024-05-15 00:41:09.102057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.197 [2024-05-15 00:41:09.105701] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.197 [2024-05-15 00:41:09.114889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.197 [2024-05-15 00:41:09.115371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.115551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.115580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.197 [2024-05-15 00:41:09.115597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.197 [2024-05-15 00:41:09.115838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.197 [2024-05-15 00:41:09.116093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.197 [2024-05-15 00:41:09.116118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.197 [2024-05-15 00:41:09.116133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.197 [2024-05-15 00:41:09.119756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.197 [2024-05-15 00:41:09.128949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.197 [2024-05-15 00:41:09.129428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.129622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.129652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.197 [2024-05-15 00:41:09.129670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.197 [2024-05-15 00:41:09.129911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.197 [2024-05-15 00:41:09.130167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.197 [2024-05-15 00:41:09.130191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.197 [2024-05-15 00:41:09.130206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.197 [2024-05-15 00:41:09.133861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.197 [2024-05-15 00:41:09.142841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.197 [2024-05-15 00:41:09.143296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.143509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.197 [2024-05-15 00:41:09.143539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.197 [2024-05-15 00:41:09.143557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.197 [2024-05-15 00:41:09.143805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.197 [2024-05-15 00:41:09.144062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.197 [2024-05-15 00:41:09.144086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.197 [2024-05-15 00:41:09.144101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.197 [2024-05-15 00:41:09.147722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.197 [2024-05-15 00:41:09.156818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.197 [2024-05-15 00:41:09.157250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.157490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.157519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.157536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.157778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.158034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.158058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.158073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.161693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.198 [2024-05-15 00:41:09.170881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.171363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.171583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.171612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.171629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.171871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.172127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.172151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.172167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.175793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.198 [2024-05-15 00:41:09.184774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.185237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.185473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.185501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.185518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.185760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.186027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.186051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.186067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.189685] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.198 [2024-05-15 00:41:09.198676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.199167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.199383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.199411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.199428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.199670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.199915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.199949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.199965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.203584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.198 [2024-05-15 00:41:09.212596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.213071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.213275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.213303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.213320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.213562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.213807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.213830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.213846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.217476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.198 [2024-05-15 00:41:09.226662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.227165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.227404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.227433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.227450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.227691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.227946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.227976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.227992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.231613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.198 [2024-05-15 00:41:09.240592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.241051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.241296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.241324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.241342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.241584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.241829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.241852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.241867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.245498] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.198 [2024-05-15 00:41:09.254686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.255163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.255341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.255369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.255387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.255627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.255872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.255896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.255911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.259541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.198 [2024-05-15 00:41:09.268728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.269218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.269395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.269423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.269441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.269682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.269927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.269960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.269982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.273607] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.198 [2024-05-15 00:41:09.282793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.283251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.283499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.283528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.283545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.283787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.284044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.284068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.284084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.287737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.198 [2024-05-15 00:41:09.296814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.297280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.297489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.297518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.297536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.297778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.298036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.298060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.298075] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.301695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.198 [2024-05-15 00:41:09.310886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.311339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.311526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.311554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.311572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.311812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.312068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.312093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.312108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.315745] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.198 [2024-05-15 00:41:09.324940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.325414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.325619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.325648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.325665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.325906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.326161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.326186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.326201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.329816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.198 [2024-05-15 00:41:09.339009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.339481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.339693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.339722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.339740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.339993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.340240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.340263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.340279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.343897] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.198 [2024-05-15 00:41:09.353097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.198 [2024-05-15 00:41:09.353569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.353753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.198 [2024-05-15 00:41:09.353781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.198 [2024-05-15 00:41:09.353798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.198 [2024-05-15 00:41:09.354051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.198 [2024-05-15 00:41:09.354297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.198 [2024-05-15 00:41:09.354320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.198 [2024-05-15 00:41:09.354335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.198 [2024-05-15 00:41:09.357984] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.458 [2024-05-15 00:41:09.367057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.367595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.367815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.367844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.458 [2024-05-15 00:41:09.367861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.458 [2024-05-15 00:41:09.368110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.458 [2024-05-15 00:41:09.368356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.458 [2024-05-15 00:41:09.368379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.458 [2024-05-15 00:41:09.368395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.458 [2024-05-15 00:41:09.372027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.458 [2024-05-15 00:41:09.381025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.381681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.381917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.381955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.458 [2024-05-15 00:41:09.381973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.458 [2024-05-15 00:41:09.382214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.458 [2024-05-15 00:41:09.382459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.458 [2024-05-15 00:41:09.382482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.458 [2024-05-15 00:41:09.382498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.458 [2024-05-15 00:41:09.386126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.458 [2024-05-15 00:41:09.394995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.395533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.395851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.395910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.458 [2024-05-15 00:41:09.395927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.458 [2024-05-15 00:41:09.396180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.458 [2024-05-15 00:41:09.396425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.458 [2024-05-15 00:41:09.396448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.458 [2024-05-15 00:41:09.396463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.458 [2024-05-15 00:41:09.400091] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.458 [2024-05-15 00:41:09.408990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.409545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.409981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.410011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.458 [2024-05-15 00:41:09.410029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.458 [2024-05-15 00:41:09.410270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.458 [2024-05-15 00:41:09.410516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.458 [2024-05-15 00:41:09.410539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.458 [2024-05-15 00:41:09.410554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.458 [2024-05-15 00:41:09.414186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.458 [2024-05-15 00:41:09.422962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.423443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.423813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.423867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.458 [2024-05-15 00:41:09.423884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.458 [2024-05-15 00:41:09.424134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.458 [2024-05-15 00:41:09.424380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.458 [2024-05-15 00:41:09.424403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.458 [2024-05-15 00:41:09.424419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.458 [2024-05-15 00:41:09.428051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.458 [2024-05-15 00:41:09.437034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.437506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.437790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.437818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.458 [2024-05-15 00:41:09.437835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.458 [2024-05-15 00:41:09.438089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.458 [2024-05-15 00:41:09.438335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.458 [2024-05-15 00:41:09.438359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.458 [2024-05-15 00:41:09.438375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.458 [2024-05-15 00:41:09.442004] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.458 [2024-05-15 00:41:09.450988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.451452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.451823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.451883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.458 [2024-05-15 00:41:09.451901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.458 [2024-05-15 00:41:09.452152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.458 [2024-05-15 00:41:09.452398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.458 [2024-05-15 00:41:09.452422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.458 [2024-05-15 00:41:09.452437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.458 [2024-05-15 00:41:09.456068] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.458 [2024-05-15 00:41:09.465053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.458 [2024-05-15 00:41:09.465525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.465704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.458 [2024-05-15 00:41:09.465733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.465750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.466002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.466249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.466272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.466288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.469908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.459 [2024-05-15 00:41:09.479116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.479558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.479794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.479823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.479840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.480092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.480338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.480361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.480377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.484006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.459 [2024-05-15 00:41:09.493200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.493626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.493915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.493953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.493977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.494219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.494464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.494487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.494503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.498135] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.459 [2024-05-15 00:41:09.507122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.507584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.507797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.507825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.507842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.508095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.508342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.508365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.508381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.512015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.459 [2024-05-15 00:41:09.521025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.521477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.521720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.521749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.521766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.522020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.522265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.522289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.522304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.525935] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.459 [2024-05-15 00:41:09.534910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.535396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.535737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.535786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.535803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.536064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.536310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.536333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.536349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.539977] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.459 [2024-05-15 00:41:09.548957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.549404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.549748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.549805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.549822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.550075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.550321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.550344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.550360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.553987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.459 [2024-05-15 00:41:09.562966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.563436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.563673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.563702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.563719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.563972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.564218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.564242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.564257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.567880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.459 [2024-05-15 00:41:09.576866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.577339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.577548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.577576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.577593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.577834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.578098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.578123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.578139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.581761] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.459 [2024-05-15 00:41:09.590960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.459 [2024-05-15 00:41:09.591415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.591607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.459 [2024-05-15 00:41:09.591637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.459 [2024-05-15 00:41:09.591655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.459 [2024-05-15 00:41:09.591896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.459 [2024-05-15 00:41:09.592152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.459 [2024-05-15 00:41:09.592176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.459 [2024-05-15 00:41:09.592192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.459 [2024-05-15 00:41:09.595813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.460 [2024-05-15 00:41:09.605017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.460 [2024-05-15 00:41:09.605516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.460 [2024-05-15 00:41:09.605784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.460 [2024-05-15 00:41:09.605813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.460 [2024-05-15 00:41:09.605830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.460 [2024-05-15 00:41:09.606082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.460 [2024-05-15 00:41:09.606328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.460 [2024-05-15 00:41:09.606352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.460 [2024-05-15 00:41:09.606367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.460 [2024-05-15 00:41:09.609996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.460 [2024-05-15 00:41:09.619012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.460 [2024-05-15 00:41:09.619672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.619888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.619916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.747 [2024-05-15 00:41:09.619943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.747 [2024-05-15 00:41:09.620202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.747 [2024-05-15 00:41:09.620449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.747 [2024-05-15 00:41:09.620477] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.747 [2024-05-15 00:41:09.620494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.747 [2024-05-15 00:41:09.624126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.747 [2024-05-15 00:41:09.632910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.747 [2024-05-15 00:41:09.633405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.633716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.633744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.747 [2024-05-15 00:41:09.633762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.747 [2024-05-15 00:41:09.634015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.747 [2024-05-15 00:41:09.634261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.747 [2024-05-15 00:41:09.634285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.747 [2024-05-15 00:41:09.634301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.747 [2024-05-15 00:41:09.637922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.747 [2024-05-15 00:41:09.646923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.747 [2024-05-15 00:41:09.647593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.647902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.647939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.747 [2024-05-15 00:41:09.647959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.747 [2024-05-15 00:41:09.648200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.747 [2024-05-15 00:41:09.648446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.747 [2024-05-15 00:41:09.648469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.747 [2024-05-15 00:41:09.648485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.747 [2024-05-15 00:41:09.652112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.747 [2024-05-15 00:41:09.660995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.747 [2024-05-15 00:41:09.661500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.661901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.661982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.747 [2024-05-15 00:41:09.662001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.747 [2024-05-15 00:41:09.662243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.747 [2024-05-15 00:41:09.662488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.747 [2024-05-15 00:41:09.662511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.747 [2024-05-15 00:41:09.662533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.747 [2024-05-15 00:41:09.666166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.747 [2024-05-15 00:41:09.674937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.747 [2024-05-15 00:41:09.675407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.675606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.675635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.747 [2024-05-15 00:41:09.675652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.747 [2024-05-15 00:41:09.675893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.747 [2024-05-15 00:41:09.676150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.747 [2024-05-15 00:41:09.676175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.747 [2024-05-15 00:41:09.676190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.747 [2024-05-15 00:41:09.679815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.747 [2024-05-15 00:41:09.689016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.747 [2024-05-15 00:41:09.689481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.689651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.689680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.747 [2024-05-15 00:41:09.689698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.747 [2024-05-15 00:41:09.689949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.747 [2024-05-15 00:41:09.690194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.747 [2024-05-15 00:41:09.690218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.747 [2024-05-15 00:41:09.690234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.747 [2024-05-15 00:41:09.693856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.747 [2024-05-15 00:41:09.703055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.747 [2024-05-15 00:41:09.703534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.703851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.747 [2024-05-15 00:41:09.703910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.703928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.704181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.704426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.704449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.704465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.708128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.748 [2024-05-15 00:41:09.717121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.717570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.717811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.717840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.717857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.718111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.718357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.718381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.718397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.722025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.748 [2024-05-15 00:41:09.731213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.731685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.731975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.732005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.732023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.732263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.732509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.732532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.732547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.736177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.748 [2024-05-15 00:41:09.745160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.745639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.745877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.745906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.745923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.746176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.746421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.746445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.746460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.750089] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.748 [2024-05-15 00:41:09.759082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.759552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.759970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.760000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.760017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.760259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.760503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.760527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.760542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.764171] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.748 [2024-05-15 00:41:09.773164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.773635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.773848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.773876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.773894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.774144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.774391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.774414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.774430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.778068] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.748 [2024-05-15 00:41:09.787061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.787509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.787883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.787947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.787967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.788208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.788453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.788477] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.788493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.792127] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.748 [2024-05-15 00:41:09.801129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.801616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.801961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.802013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.802031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.802272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.802517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.802541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.802556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.806193] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.748 [2024-05-15 00:41:09.815218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.815702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.815971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.816001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.748 [2024-05-15 00:41:09.816019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.748 [2024-05-15 00:41:09.816260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.748 [2024-05-15 00:41:09.816505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.748 [2024-05-15 00:41:09.816528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.748 [2024-05-15 00:41:09.816543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.748 [2024-05-15 00:41:09.820195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.748 [2024-05-15 00:41:09.829208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.748 [2024-05-15 00:41:09.829835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.748 [2024-05-15 00:41:09.830036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.830066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.749 [2024-05-15 00:41:09.830084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.749 [2024-05-15 00:41:09.830325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.749 [2024-05-15 00:41:09.830570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.749 [2024-05-15 00:41:09.830593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.749 [2024-05-15 00:41:09.830609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.749 [2024-05-15 00:41:09.834245] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.749 [2024-05-15 00:41:09.843241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.749 [2024-05-15 00:41:09.843671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.843920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.843996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.749 [2024-05-15 00:41:09.844015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.749 [2024-05-15 00:41:09.844257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.749 [2024-05-15 00:41:09.844502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.749 [2024-05-15 00:41:09.844525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.749 [2024-05-15 00:41:09.844541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.749 [2024-05-15 00:41:09.848180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.749 [2024-05-15 00:41:09.857181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.749 [2024-05-15 00:41:09.857733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.857998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.858027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.749 [2024-05-15 00:41:09.858045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.749 [2024-05-15 00:41:09.858293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.749 [2024-05-15 00:41:09.858538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.749 [2024-05-15 00:41:09.858562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.749 [2024-05-15 00:41:09.858578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.749 [2024-05-15 00:41:09.862225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.749 [2024-05-15 00:41:09.871222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.749 [2024-05-15 00:41:09.871827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.872059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.872089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.749 [2024-05-15 00:41:09.872106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.749 [2024-05-15 00:41:09.872347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.749 [2024-05-15 00:41:09.872592] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.749 [2024-05-15 00:41:09.872616] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.749 [2024-05-15 00:41:09.872631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.749 [2024-05-15 00:41:09.876289] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.749 [2024-05-15 00:41:09.885288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.749 [2024-05-15 00:41:09.885962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.886206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.886234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.749 [2024-05-15 00:41:09.886257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.749 [2024-05-15 00:41:09.886499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.749 [2024-05-15 00:41:09.886744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.749 [2024-05-15 00:41:09.886768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.749 [2024-05-15 00:41:09.886784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.749 [2024-05-15 00:41:09.890431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.749 [2024-05-15 00:41:09.899242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.749 [2024-05-15 00:41:09.899802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.900022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.749 [2024-05-15 00:41:09.900052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:43.749 [2024-05-15 00:41:09.900069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:43.749 [2024-05-15 00:41:09.900310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:43.749 [2024-05-15 00:41:09.900556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.749 [2024-05-15 00:41:09.900585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.749 [2024-05-15 00:41:09.900600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.749 [2024-05-15 00:41:09.904258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.009 [2024-05-15 00:41:09.913199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.009 [2024-05-15 00:41:09.913628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.913843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.913872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.009 [2024-05-15 00:41:09.913890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.009 [2024-05-15 00:41:09.914143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.009 [2024-05-15 00:41:09.914389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.009 [2024-05-15 00:41:09.914413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.009 [2024-05-15 00:41:09.914429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.009 [2024-05-15 00:41:09.918077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.009 [2024-05-15 00:41:09.927289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.009 [2024-05-15 00:41:09.927818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.928046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.928076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.009 [2024-05-15 00:41:09.928094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.009 [2024-05-15 00:41:09.928342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.009 [2024-05-15 00:41:09.928588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.009 [2024-05-15 00:41:09.928612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.009 [2024-05-15 00:41:09.928628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.009 [2024-05-15 00:41:09.932262] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.009 [2024-05-15 00:41:09.941255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.009 [2024-05-15 00:41:09.941795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.941988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.942017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.009 [2024-05-15 00:41:09.942034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.009 [2024-05-15 00:41:09.942275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.009 [2024-05-15 00:41:09.942521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.009 [2024-05-15 00:41:09.942544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.009 [2024-05-15 00:41:09.942560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.009 [2024-05-15 00:41:09.946196] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.009 [2024-05-15 00:41:09.955201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.009 [2024-05-15 00:41:09.955693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.955909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.955945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.009 [2024-05-15 00:41:09.955965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.009 [2024-05-15 00:41:09.956205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.009 [2024-05-15 00:41:09.956451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.009 [2024-05-15 00:41:09.956474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.009 [2024-05-15 00:41:09.956490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.009 [2024-05-15 00:41:09.960120] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.009 [2024-05-15 00:41:09.969100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.009 [2024-05-15 00:41:09.969542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.969812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.969842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.009 [2024-05-15 00:41:09.969859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.009 [2024-05-15 00:41:09.970115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.009 [2024-05-15 00:41:09.970368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.009 [2024-05-15 00:41:09.970392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.009 [2024-05-15 00:41:09.970407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.009 [2024-05-15 00:41:09.974043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.009 [2024-05-15 00:41:09.983037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.009 [2024-05-15 00:41:09.983493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.983843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.983895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.009 [2024-05-15 00:41:09.983912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.009 [2024-05-15 00:41:09.984163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.009 [2024-05-15 00:41:09.984409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.009 [2024-05-15 00:41:09.984433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.009 [2024-05-15 00:41:09.984448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.009 [2024-05-15 00:41:09.988075] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.009 [2024-05-15 00:41:09.997055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.009 [2024-05-15 00:41:09.997580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.997824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.009 [2024-05-15 00:41:09.997875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.009 [2024-05-15 00:41:09.997893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.009 [2024-05-15 00:41:09.998143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.009 [2024-05-15 00:41:09.998389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.009 [2024-05-15 00:41:09.998412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.009 [2024-05-15 00:41:09.998428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.002056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.010 [2024-05-15 00:41:10.011045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.011514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.011850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.011904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.011922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.012172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.012418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.012455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.012482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.016360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.010 [2024-05-15 00:41:10.025137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.025742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.025991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.026022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.026039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.026281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.026527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.026551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.026566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.030196] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.010 [2024-05-15 00:41:10.039184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.039635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.039944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.039974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.039992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.040233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.040479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.040502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.040517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.044163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.010 [2024-05-15 00:41:10.053168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.053631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.053819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.053849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.053866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.054161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.054427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.054453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.054475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.058116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.010 [2024-05-15 00:41:10.067104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.067528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.067796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.067825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.067843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.068095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.068341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.068366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.068381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.072015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.010 [2024-05-15 00:41:10.081017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.081505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.081838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.081886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.081904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.082155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.082402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.082425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.082441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.086073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.010 [2024-05-15 00:41:10.095062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.095605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.095906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.095942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.095962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.096204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.096449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.096472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.096487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.100124] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.010 [2024-05-15 00:41:10.109117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 [2024-05-15 00:41:10.109770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.110012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 [2024-05-15 00:41:10.110042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.110059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 [2024-05-15 00:41:10.110300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.110545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.110568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.110584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 [2024-05-15 00:41:10.114220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 989440 Killed "${NVMF_APP[@]}" "$@" 00:25:44.010 [2024-05-15 00:41:10.123216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.010 00:41:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:44.010 00:41:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:44.010 [2024-05-15 00:41:10.123677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 00:41:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:44.010 [2024-05-15 00:41:10.123859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.010 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:44.010 [2024-05-15 00:41:10.123888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.010 [2024-05-15 00:41:10.123906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.010 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.010 [2024-05-15 00:41:10.124157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.010 [2024-05-15 00:41:10.124402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.010 [2024-05-15 00:41:10.124426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.010 [2024-05-15 00:41:10.124441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.010 00:41:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=990512 00:25:44.010 00:41:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:44.011 [2024-05-15 00:41:10.128076] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.011 00:41:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 990512 00:25:44.011 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 990512 ']' 00:25:44.011 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.011 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:44.011 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:44.011 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:44.011 00:41:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.011 [2024-05-15 00:41:10.137287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.011 [2024-05-15 00:41:10.137749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.011 [2024-05-15 00:41:10.137948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.011 [2024-05-15 00:41:10.137980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.011 [2024-05-15 00:41:10.137999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.011 [2024-05-15 00:41:10.138242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.011 [2024-05-15 00:41:10.138487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.011 [2024-05-15 00:41:10.138511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.011 [2024-05-15 00:41:10.138526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.011 [2024-05-15 00:41:10.142162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.011 [2024-05-15 00:41:10.151370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.011 [2024-05-15 00:41:10.151811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.011 [2024-05-15 00:41:10.151990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.011 [2024-05-15 00:41:10.152019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.011 [2024-05-15 00:41:10.152037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.011 [2024-05-15 00:41:10.152278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.011 [2024-05-15 00:41:10.152523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.011 [2024-05-15 00:41:10.152546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.011 [2024-05-15 00:41:10.152562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.011 [2024-05-15 00:41:10.156202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.011 [2024-05-15 00:41:10.165311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.011 [2024-05-15 00:41:10.165759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.011 [2024-05-15 00:41:10.165977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.011 [2024-05-15 00:41:10.166007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.011 [2024-05-15 00:41:10.166025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.011 [2024-05-15 00:41:10.166267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.011 [2024-05-15 00:41:10.166512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.011 [2024-05-15 00:41:10.166535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.011 [2024-05-15 00:41:10.166551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.011 [2024-05-15 00:41:10.170233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.271 [2024-05-15 00:41:10.176946] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:44.271 [2024-05-15 00:41:10.177037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.271 [2024-05-15 00:41:10.179283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.271 [2024-05-15 00:41:10.179765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.179989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.180019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.271 [2024-05-15 00:41:10.180038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.271 [2024-05-15 00:41:10.180280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.271 [2024-05-15 00:41:10.180525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.271 [2024-05-15 00:41:10.180550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.271 [2024-05-15 00:41:10.180566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.271 [2024-05-15 00:41:10.184203] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.271 [2024-05-15 00:41:10.193203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.271 [2024-05-15 00:41:10.193684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.193881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.193910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.271 [2024-05-15 00:41:10.193928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.271 [2024-05-15 00:41:10.194181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.271 [2024-05-15 00:41:10.194427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.271 [2024-05-15 00:41:10.194451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.271 [2024-05-15 00:41:10.194466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.271 [2024-05-15 00:41:10.198102] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.271 [2024-05-15 00:41:10.207097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.271 [2024-05-15 00:41:10.207594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.207807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.207836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.271 [2024-05-15 00:41:10.207853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.271 [2024-05-15 00:41:10.208105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.271 [2024-05-15 00:41:10.208351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.271 [2024-05-15 00:41:10.208375] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.271 [2024-05-15 00:41:10.208397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.271 [2024-05-15 00:41:10.212038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.271 [2024-05-15 00:41:10.221068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.271 [2024-05-15 00:41:10.221521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.221734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.221763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.271 [2024-05-15 00:41:10.221780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.271 [2024-05-15 00:41:10.222035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.271 [2024-05-15 00:41:10.222282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.271 [2024-05-15 00:41:10.222305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.271 [2024-05-15 00:41:10.222321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.271 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.271 [2024-05-15 00:41:10.225955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.271 [2024-05-15 00:41:10.235173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.271 [2024-05-15 00:41:10.235631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.235849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.235881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.271 [2024-05-15 00:41:10.235899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.271 [2024-05-15 00:41:10.236153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.271 [2024-05-15 00:41:10.236399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.271 [2024-05-15 00:41:10.236423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.271 [2024-05-15 00:41:10.236439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.271 [2024-05-15 00:41:10.240072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.271 [2024-05-15 00:41:10.249282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.271 [2024-05-15 00:41:10.249771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.250000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.250030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.271 [2024-05-15 00:41:10.250048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.271 [2024-05-15 00:41:10.250289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.271 [2024-05-15 00:41:10.250534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.271 [2024-05-15 00:41:10.250557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.271 [2024-05-15 00:41:10.250578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.271 [2024-05-15 00:41:10.254222] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.271 [2024-05-15 00:41:10.262742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.271 [2024-05-15 00:41:10.263188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.271 [2024-05-15 00:41:10.263392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.263419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.263435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.263677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.263900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.263943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.263959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.267069] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.272 [2024-05-15 00:41:10.270899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:44.272 [2024-05-15 00:41:10.276254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.276774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.277006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.277034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.277052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.277291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.277524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.277545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.277562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.280818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.272 [2024-05-15 00:41:10.289740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.290321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.290533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.290560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.290581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.290831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.291077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.291100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.291118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.294369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.272 [2024-05-15 00:41:10.303253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.303690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.303910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.303944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.303963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.304196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.304421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.304442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.304471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.307660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.272 [2024-05-15 00:41:10.316768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.317225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.317454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.317480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.317495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.317713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.317954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.317975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.317989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.321111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.272 [2024-05-15 00:41:10.330281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.330775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.330984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.331024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.331043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.331282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.331507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.331528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.331543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.334665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.272 [2024-05-15 00:41:10.343804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.344480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.344687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.344714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.344736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.344998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.345257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.345280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.345299] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.348534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.272 [2024-05-15 00:41:10.357303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.357788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.358014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.358041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.358058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.358291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.358507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.358527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.358557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.361700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.272 [2024-05-15 00:41:10.370942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.371371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.371572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.371598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.371614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.371844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.372103] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.272 [2024-05-15 00:41:10.372125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.272 [2024-05-15 00:41:10.372140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.272 [2024-05-15 00:41:10.375434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.272 [2024-05-15 00:41:10.384525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.272 [2024-05-15 00:41:10.384993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.385162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.272 [2024-05-15 00:41:10.385188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.272 [2024-05-15 00:41:10.385204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.272 [2024-05-15 00:41:10.385421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.272 [2024-05-15 00:41:10.385666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.273 [2024-05-15 00:41:10.385687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.273 [2024-05-15 00:41:10.385700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.273 [2024-05-15 00:41:10.387624] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.273 [2024-05-15 00:41:10.387656] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.273 [2024-05-15 00:41:10.387685] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.273 [2024-05-15 00:41:10.387697] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.273 [2024-05-15 00:41:10.387706] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.273 [2024-05-15 00:41:10.387764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.273 [2024-05-15 00:41:10.387825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.273 [2024-05-15 00:41:10.387827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.273 [2024-05-15 00:41:10.389076] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.273 [2024-05-15 00:41:10.398246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.273 [2024-05-15 00:41:10.398871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.273 [2024-05-15 00:41:10.399108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.273 [2024-05-15 00:41:10.399136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.273 [2024-05-15 00:41:10.399157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.273 [2024-05-15 00:41:10.399400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.273 [2024-05-15 00:41:10.399622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.273 [2024-05-15 00:41:10.399644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.273 [2024-05-15 00:41:10.399662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.273 [2024-05-15 00:41:10.402939] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.273 [2024-05-15 00:41:10.411937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.273 [2024-05-15 00:41:10.412530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.273 [2024-05-15 00:41:10.412760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.273 [2024-05-15 00:41:10.412787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.273 [2024-05-15 00:41:10.412808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.273 [2024-05-15 00:41:10.413044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.273 [2024-05-15 00:41:10.413290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.273 [2024-05-15 00:41:10.413315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.273 [2024-05-15 00:41:10.413333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.273 [2024-05-15 00:41:10.416664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.273 [2024-05-15 00:41:10.425636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.273 [2024-05-15 00:41:10.426225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.273 [2024-05-15 00:41:10.426445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.273 [2024-05-15 00:41:10.426474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.273 [2024-05-15 00:41:10.426496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.273 [2024-05-15 00:41:10.426738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.273 [2024-05-15 00:41:10.426988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.273 [2024-05-15 00:41:10.427011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.273 [2024-05-15 00:41:10.427030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.273 [2024-05-15 00:41:10.430461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.532 [2024-05-15 00:41:10.439539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.532 [2024-05-15 00:41:10.440067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.532 [2024-05-15 00:41:10.440296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.532 [2024-05-15 00:41:10.440323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.532 [2024-05-15 00:41:10.440345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.532 [2024-05-15 00:41:10.440585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.532 [2024-05-15 00:41:10.440806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.532 [2024-05-15 00:41:10.440827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.532 [2024-05-15 00:41:10.440845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.532 [2024-05-15 00:41:10.444129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.532 [2024-05-15 00:41:10.453108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.532 [2024-05-15 00:41:10.453732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.532 [2024-05-15 00:41:10.453969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.532 [2024-05-15 00:41:10.453999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.532 [2024-05-15 00:41:10.454021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.532 [2024-05-15 00:41:10.454247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.532 [2024-05-15 00:41:10.454485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.454517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.454535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.457886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.533 [2024-05-15 00:41:10.466760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.467352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.467564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.467591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.467612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.467854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.468105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.468128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.468145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.471492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.533 [2024-05-15 00:41:10.480578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.481056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.481264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.481290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.481307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.481525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.481746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.481768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.481783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.485096] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.533 [2024-05-15 00:41:10.494290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.494756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.494926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.494958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.494975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.495191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.495421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.495442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.495467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.498699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.533 [2024-05-15 00:41:10.507800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.508245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.508445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.508471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.508487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.508716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.508954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.508976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.508990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.512266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.533 [2024-05-15 00:41:10.521360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.521778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.521971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.521998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.522014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.522231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.522460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.522481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.522494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.525736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.533 [2024-05-15 00:41:10.534885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.535349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.535505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.535531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.535547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.535764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.536025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.536047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.536060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.539316] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.533 [2024-05-15 00:41:10.548427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.548845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.549041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.549068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.549084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.549302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.549530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.549550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.549564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.552769] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.533 [2024-05-15 00:41:10.561895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.562338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.562507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.562531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.533 [2024-05-15 00:41:10.562546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.533 [2024-05-15 00:41:10.562778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.533 [2024-05-15 00:41:10.563023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.533 [2024-05-15 00:41:10.563046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.533 [2024-05-15 00:41:10.563060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.533 [2024-05-15 00:41:10.566309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.533 [2024-05-15 00:41:10.575415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.533 [2024-05-15 00:41:10.575867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.576043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.533 [2024-05-15 00:41:10.576071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.576087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.576317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.576531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.576551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.576564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.579797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.534 [2024-05-15 00:41:10.589060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.589493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.589708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.589733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.589749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.589974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.590196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.590217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.590246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.593488] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.534 [2024-05-15 00:41:10.602635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.603054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.603249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.603275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.603291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.603508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.603738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.603758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.603772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.607030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.534 [2024-05-15 00:41:10.616142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.616580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.616800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.616825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.616841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.617066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.617300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.617321] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.617335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.620539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.534 [2024-05-15 00:41:10.629664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.630113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.630308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.630334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.630349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.630566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.630795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.630816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.630829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.634097] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.534 [2024-05-15 00:41:10.643278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.643695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.643853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.643879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.643894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.644119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.644352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.644373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.644387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.647631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.534 [2024-05-15 00:41:10.656706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.657144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.657332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.657358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.657374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.657590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.657819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.657839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.657853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.661086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.534 [2024-05-15 00:41:10.670371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.670769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.670972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.671000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.671016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.671232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.671456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.671478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.671493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.674818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.534 [2024-05-15 00:41:10.683897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.534 [2024-05-15 00:41:10.684327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.684481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.534 [2024-05-15 00:41:10.684507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.534 [2024-05-15 00:41:10.684523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.534 [2024-05-15 00:41:10.684754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.534 [2024-05-15 00:41:10.684996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.534 [2024-05-15 00:41:10.685018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.534 [2024-05-15 00:41:10.685032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.534 [2024-05-15 00:41:10.688318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.794 [2024-05-15 00:41:10.697557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.697991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.698166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.698192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.698208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.698426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.698655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.698675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.698689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.702055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.794 [2024-05-15 00:41:10.711158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.711586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.711776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.711802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.711823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.712050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.712286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.712307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.712321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.715523] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.794 [2024-05-15 00:41:10.724626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.725056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.725217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.725243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.725258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.725490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.725704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.725724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.725737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.728990] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.794 [2024-05-15 00:41:10.738112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.738563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.738716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.738741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.738757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.738982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.739203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.739224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.739253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.742483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.794 [2024-05-15 00:41:10.751605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.752038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.752201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.752227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.752242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.752477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.752690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.752711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.752724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.755948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.794 [2024-05-15 00:41:10.765236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.765655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.765820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.765846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.765861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.766087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.766321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.766342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.766356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.769625] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.794 [2024-05-15 00:41:10.778816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.779233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.779416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.779441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.779457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.779674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.779904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.779925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.779963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.783196] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.794 [2024-05-15 00:41:10.792391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.794 [2024-05-15 00:41:10.792808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.792997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.794 [2024-05-15 00:41:10.793023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.794 [2024-05-15 00:41:10.793039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.794 [2024-05-15 00:41:10.793255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.794 [2024-05-15 00:41:10.793490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.794 [2024-05-15 00:41:10.793510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.794 [2024-05-15 00:41:10.793524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.794 [2024-05-15 00:41:10.796809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.795 [2024-05-15 00:41:10.805959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.806427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.806594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.806619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.806635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.806851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.807112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.807134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.807148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.810409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.795 [2024-05-15 00:41:10.819568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.819984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.820145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.820171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.820186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.820419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.820632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.820653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.820666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.823870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.795 [2024-05-15 00:41:10.833163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.833582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.833798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.833823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.833839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.834066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.834305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.834327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.834340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.837585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.795 [2024-05-15 00:41:10.846700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.847119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.847279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.847305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.847320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.847537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.847767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.847787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.847801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.851111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.795 [2024-05-15 00:41:10.860245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.860686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.860877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.860903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.860918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.861143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.861374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.861396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.861409] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.864652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.795 [2024-05-15 00:41:10.873763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.874192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.874368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.874393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.874409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.874626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.874856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.874876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.874895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.878131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.795 [2024-05-15 00:41:10.887398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.887838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.888010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.888038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.888054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.888286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.888501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.888522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.888535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.891777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.795 [2024-05-15 00:41:10.900866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.901305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.901520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.901546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.901562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.901778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.902036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.902059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.902072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.905335] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.795 [2024-05-15 00:41:10.914506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.914943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.915129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.915154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.915170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.915400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.915614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.795 [2024-05-15 00:41:10.915634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.795 [2024-05-15 00:41:10.915652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.795 [2024-05-15 00:41:10.918940] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.795 [2024-05-15 00:41:10.928041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.795 [2024-05-15 00:41:10.928453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.928646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.795 [2024-05-15 00:41:10.928673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.795 [2024-05-15 00:41:10.928688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.795 [2024-05-15 00:41:10.928911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.795 [2024-05-15 00:41:10.929152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.796 [2024-05-15 00:41:10.929175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.796 [2024-05-15 00:41:10.929189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.796 [2024-05-15 00:41:10.932498] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.796 [2024-05-15 00:41:10.941668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.796 [2024-05-15 00:41:10.942107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.796 [2024-05-15 00:41:10.942273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.796 [2024-05-15 00:41:10.942299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:44.796 [2024-05-15 00:41:10.942315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:44.796 [2024-05-15 00:41:10.942533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:44.796 [2024-05-15 00:41:10.942763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.796 [2024-05-15 00:41:10.942784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.796 [2024-05-15 00:41:10.942798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.796 [2024-05-15 00:41:10.946141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.796 [2024-05-15 00:41:10.955436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.796 [2024-05-15 00:41:10.955847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.956041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.956068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.055 [2024-05-15 00:41:10.956084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.055 [2024-05-15 00:41:10.956319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.055 [2024-05-15 00:41:10.956546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.055 [2024-05-15 00:41:10.956569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.055 [2024-05-15 00:41:10.956582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.055 [2024-05-15 00:41:10.959849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.055 [2024-05-15 00:41:10.968962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.055 [2024-05-15 00:41:10.969390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.969588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.969615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.055 [2024-05-15 00:41:10.969630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.055 [2024-05-15 00:41:10.969861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.055 [2024-05-15 00:41:10.970108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.055 [2024-05-15 00:41:10.970131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.055 [2024-05-15 00:41:10.970145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.055 [2024-05-15 00:41:10.973374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.055 [2024-05-15 00:41:10.982443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.055 [2024-05-15 00:41:10.982858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.983053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.983081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.055 [2024-05-15 00:41:10.983097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.055 [2024-05-15 00:41:10.983334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.055 [2024-05-15 00:41:10.983549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.055 [2024-05-15 00:41:10.983570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.055 [2024-05-15 00:41:10.983584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.055 [2024-05-15 00:41:10.986878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.055 [2024-05-15 00:41:10.996015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.055 [2024-05-15 00:41:10.996452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.996644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.055 [2024-05-15 00:41:10.996670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.055 [2024-05-15 00:41:10.996686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.055 [2024-05-15 00:41:10.996903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.055 [2024-05-15 00:41:10.997163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.055 [2024-05-15 00:41:10.997185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.055 [2024-05-15 00:41:10.997199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.055 [2024-05-15 00:41:11.000430] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.055 [2024-05-15 00:41:11.009510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.055 [2024-05-15 00:41:11.009961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.010123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.010149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.010164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.010381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.010621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.010641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.010655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.013864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.056 [2024-05-15 00:41:11.022995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.023455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.023643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.023670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.023686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.023944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.024167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.024188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.024202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.027428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.056 [2024-05-15 00:41:11.036525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.036985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.037153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.037180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.037196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.037428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.037642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.037663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.037676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.040885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.056 [2024-05-15 00:41:11.049988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.050424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.050590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.050617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.050633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.050864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.051111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.051133] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.051147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.054371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.056 [2024-05-15 00:41:11.063432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.063849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.064062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.064089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.064106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.064335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.064549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.064570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.064584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.067760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.056 [2024-05-15 00:41:11.077058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.077527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.077744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.077770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.077786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.078013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.078249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.078271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.078284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.081484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.056 [2024-05-15 00:41:11.090625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.091036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.091228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.091254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.091275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.091507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.091723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.091743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.091757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.095008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.056 [2024-05-15 00:41:11.104216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.104649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.104835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.104861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.104877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.105104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.105339] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.105360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.105374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.108597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.056 [2024-05-15 00:41:11.117704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.118144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.118355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.118381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.118397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.118627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.118842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.056 [2024-05-15 00:41:11.118863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.056 [2024-05-15 00:41:11.118877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.056 [2024-05-15 00:41:11.122108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.056 [2024-05-15 00:41:11.131161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.056 [2024-05-15 00:41:11.131596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.131753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.056 [2024-05-15 00:41:11.131780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.056 [2024-05-15 00:41:11.131795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.056 [2024-05-15 00:41:11.132030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.056 [2024-05-15 00:41:11.132267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.057 [2024-05-15 00:41:11.132288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.057 [2024-05-15 00:41:11.132301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.057 [2024-05-15 00:41:11.135513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.057 [2024-05-15 00:41:11.144751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.057 [2024-05-15 00:41:11.145195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.145385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.145411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.057 [2024-05-15 00:41:11.145427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.057 [2024-05-15 00:41:11.145658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.057 [2024-05-15 00:41:11.145873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.057 [2024-05-15 00:41:11.145893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.057 [2024-05-15 00:41:11.145907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.057 [2024-05-15 00:41:11.149264] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.057 [2024-05-15 00:41:11.158416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.057 [2024-05-15 00:41:11.158808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.159017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.159044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.057 [2024-05-15 00:41:11.159060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.057 [2024-05-15 00:41:11.159284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.057 [2024-05-15 00:41:11.159506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.057 [2024-05-15 00:41:11.159527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.057 [2024-05-15 00:41:11.159541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.057 [2024-05-15 00:41:11.162821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.057 [2024-05-15 00:41:11.172012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:25:45.057 [2024-05-15 00:41:11.172398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:45.057 [2024-05-15 00:41:11.172610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.172640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:45.057 [2024-05-15 00:41:11.172657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.057 [2024-05-15 00:41:11.172874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.057 [2024-05-15 00:41:11.173108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.057 [2024-05-15 00:41:11.173130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.057 [2024-05-15 00:41:11.173144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.057 [2024-05-15 00:41:11.176489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.057 [2024-05-15 00:41:11.185672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.057 [2024-05-15 00:41:11.186093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.186260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.186286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.057 [2024-05-15 00:41:11.186302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.057 [2024-05-15 00:41:11.186528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.057 [2024-05-15 00:41:11.186752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.057 [2024-05-15 00:41:11.186774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.057 [2024-05-15 00:41:11.186788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.057 [2024-05-15 00:41:11.190131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:45.057 [2024-05-15 00:41:11.195207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.057 [2024-05-15 00:41:11.199267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.057 [2024-05-15 00:41:11.199709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.199896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.199922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.057 [2024-05-15 00:41:11.199946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.057 [2024-05-15 00:41:11.200165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.057 [2024-05-15 00:41:11.200416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.057 [2024-05-15 00:41:11.200437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.057 [2024-05-15 00:41:11.200451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.057 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:45.057 [2024-05-15 00:41:11.203760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.057 [2024-05-15 00:41:11.212834] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.057 [2024-05-15 00:41:11.213287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.213449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.057 [2024-05-15 00:41:11.213474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.057 [2024-05-15 00:41:11.213489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.057 [2024-05-15 00:41:11.213724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.057 [2024-05-15 00:41:11.213966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.057 [2024-05-15 00:41:11.213988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.057 [2024-05-15 00:41:11.214002] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.316 [2024-05-15 00:41:11.217506] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.316 [2024-05-15 00:41:11.226423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.316 [2024-05-15 00:41:11.226859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.316 [2024-05-15 00:41:11.227044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.316 [2024-05-15 00:41:11.227071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.316 [2024-05-15 00:41:11.227087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.316 [2024-05-15 00:41:11.227318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.316 [2024-05-15 00:41:11.227533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.316 [2024-05-15 00:41:11.227553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.316 [2024-05-15 00:41:11.227567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.316 [2024-05-15 00:41:11.230874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.316 [2024-05-15 00:41:11.240106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.316 [2024-05-15 00:41:11.240729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.316 [2024-05-15 00:41:11.240902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.316 [2024-05-15 00:41:11.240942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.316 [2024-05-15 00:41:11.240964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.316 [2024-05-15 00:41:11.241190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.316 [2024-05-15 00:41:11.241436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.316 [2024-05-15 00:41:11.241458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.316 [2024-05-15 00:41:11.241482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.316 Malloc0 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:45.316 [2024-05-15 00:41:11.244817] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:45.316 [2024-05-15 00:41:11.253800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.316 [2024-05-15 00:41:11.254245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.316 [2024-05-15 00:41:11.254451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.316 [2024-05-15 00:41:11.254477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a4990 with addr=10.0.0.2, port=4420 00:25:45.316 [2024-05-15 00:41:11.254493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4990 is same with the state(5) to be set 00:25:45.316 [2024-05-15 00:41:11.254723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a4990 (9): Bad file descriptor 00:25:45.316 [2024-05-15 00:41:11.254963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:45.316 [2024-05-15 00:41:11.254986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:45.316 [2024-05-15 00:41:11.255000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:45.316 [2024-05-15 00:41:11.258341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:45.316 [2024-05-15 00:41:11.263022] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:45.316 [2024-05-15 00:41:11.263297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.316 00:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 989745 00:25:45.316 [2024-05-15 00:41:11.267573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:45.316 [2024-05-15 00:41:11.317213] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
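Interleaved with the reset retries above, host/bdevperf.sh has been bringing up the target side over RPC, and the retry storm stops exactly when the listener appears ("Resetting controller successful"). Pulled out of the xtrace noise, and assuming rpc_cmd ultimately invokes the SPDK scripts/rpc.py client against the target's RPC socket (the test drives it through the autotest_common.sh wrapper), the sequence is a sketch like:

    # Target bring-up replayed from the rpc_cmd lines above (sketch only;
    # the rpc.py path and socket are assumptions, the arguments are verbatim).
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport init
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420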
00:25:55.278 00:25:55.278 Latency(us) 00:25:55.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.278 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:55.278 Verification LBA range: start 0x0 length 0x4000 00:25:55.278 Nvme1n1 : 15.01 5610.34 21.92 10112.65 0.00 8113.85 1031.59 19709.35 00:25:55.278 =================================================================================================================== 00:25:55.278 Total : 5610.34 21.92 10112.65 0.00 8113.85 1031.59 19709.35 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:55.278 rmmod nvme_tcp 00:25:55.278 rmmod nvme_fabrics 00:25:55.278 rmmod nvme_keyring 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 990512 ']' 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 990512 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' -z 990512 ']' 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # kill -0 990512 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # uname 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:55.278 00:41:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 990512 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 990512' 00:25:55.278 killing process with pid 990512 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # kill 990512 00:25:55.278 [2024-05-15 00:41:20.002455] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # wait 990512 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.278 00:41:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.216 00:41:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:56.216 00:25:56.216 real 0m23.133s 00:25:56.216 user 0m55.941s 00:25:56.216 sys 0m6.181s 00:25:56.216 00:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:56.216 00:41:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:56.216 ************************************ 00:25:56.216 END TEST nvmf_bdevperf 00:25:56.216 ************************************ 00:25:56.216 00:41:22 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:56.216 00:41:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:56.216 00:41:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:56.216 00:41:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.474 ************************************ 00:25:56.474 START TEST nvmf_target_disconnect 00:25:56.474 ************************************ 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:56.474 * Looking for test storage... 
00:25:56.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.474 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:56.475 00:41:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:59.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:59.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.009 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.010 00:41:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:59.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:59.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:25:59.010 00:25:59.010 --- 10.0.0.2 ping statistics --- 00:25:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.010 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:25:59.010 00:25:59.010 --- 10.0.0.1 ping statistics --- 00:25:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.010 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:59.010 ************************************ 00:25:59.010 START TEST nvmf_target_disconnect_tc1 00:25:59.010 ************************************ 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc1 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:25:59.010 
00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:59.010 00:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.010 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.010 [2024-05-15 00:41:25.057801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.010 [2024-05-15 00:41:25.058069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.010 [2024-05-15 00:41:25.058098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc06d60 with addr=10.0.0.2, port=4420 00:25:59.010 [2024-05-15 00:41:25.058138] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:59.010 [2024-05-15 00:41:25.058164] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:59.010 [2024-05-15 00:41:25.058177] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:59.010 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:59.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:59.010 Initializing NVMe Controllers 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 
-- # (( !es == 0 )) 00:25:59.010 00:25:59.010 real 0m0.117s 00:25:59.010 user 0m0.049s 00:25:59.010 sys 0m0.067s 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:59.010 ************************************ 00:25:59.010 END TEST nvmf_target_disconnect_tc1 00:25:59.010 ************************************ 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:59.010 00:41:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:59.010 ************************************ 00:25:59.010 START TEST nvmf_target_disconnect_tc2 00:25:59.010 ************************************ 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc2 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=993960 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 993960 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 993960 ']' 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:59.011 00:41:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:59.269 [2024-05-15 00:41:25.176547] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:59.269 [2024-05-15 00:41:25.176632] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.269 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.269 [2024-05-15 00:41:25.256943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.269 [2024-05-15 00:41:25.379394] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.269 [2024-05-15 00:41:25.379462] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.269 [2024-05-15 00:41:25.379479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.269 [2024-05-15 00:41:25.379492] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.269 [2024-05-15 00:41:25.379503] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.269 [2024-05-15 00:41:25.379609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:59.269 [2024-05-15 00:41:25.379709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:59.269 [2024-05-15 00:41:25.379778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:25:59.269 [2024-05-15 00:41:25.379786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 Malloc0 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 [2024-05-15 00:41:26.167312] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 [2024-05-15 00:41:26.195266] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:00.201 [2024-05-15 00:41:26.195552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.201 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:00.202 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:00.202 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:00.202 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.202 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:00.202 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=994114 00:26:00.202 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' 00:26:00.202 00:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:00.202 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.105 00:41:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 993960 00:26:02.105 00:41:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Write completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Write completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Write completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Write completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Write completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Write completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.105 Read completed with error (sct=0, sc=8) 00:26:02.105 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 [2024-05-15 00:41:28.221869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting 
I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 [2024-05-15 00:41:28.222221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 
00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 [2024-05-15 00:41:28.222518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read 
completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Read completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 Write completed with error (sct=0, sc=8) 00:26:02.106 starting I/O failed 00:26:02.106 [2024-05-15 00:41:28.222819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:02.106 [2024-05-15 00:41:28.223083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.106 [2024-05-15 00:41:28.223286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.106 [2024-05-15 00:41:28.223321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.106 qpair failed and we were unable to recover it. 00:26:02.106 [2024-05-15 00:41:28.223584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.106 [2024-05-15 00:41:28.223818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.106 [2024-05-15 00:41:28.223843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.106 qpair failed and we were unable to recover it. 00:26:02.106 [2024-05-15 00:41:28.224038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.106 [2024-05-15 00:41:28.224203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.106 [2024-05-15 00:41:28.224243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.106 qpair failed and we were unable to recover it. 00:26:02.106 [2024-05-15 00:41:28.224422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.106 [2024-05-15 00:41:28.224643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.224668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 
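The stretch above is the heart of the tc2 case: target_disconnect.sh sleeps, hard-kills the nvmf target process (kill -9 993960) while I/O is still queued, every outstanding read and write then completes with an error, and the host driver reports a CQ transport error on each qpair (ids 4, 3, 2, 1) before it starts retrying the connection. The sketch below reproduces that generic shape outside the harness; the target binary path and the run_initiator_io helper are illustrative assumptions, not commands taken from this log.

#!/usr/bin/env bash
# Sketch only: kill an NVMe-oF target while I/O is in flight and expect the I/O to fail.
# TGT_BIN and run_initiator_io are hypothetical placeholders, not part of this run.
set -u

TGT_BIN=./build/bin/nvmf_tgt      # assumed target binary location
ADDR=10.0.0.2                     # address/port that appear in the log above
PORT=4420

"$TGT_BIN" &                      # start the target in the background
tgt_pid=$!
sleep 2                           # give it time to start listening

run_initiator_io "$ADDR" "$PORT" &   # hypothetical helper that queues reads/writes
io_pid=$!

sleep 2
kill -9 "$tgt_pid"                # hard-kill the target mid-I/O, as the test does

# With the target gone, the queued commands should complete with errors and
# any reconnect attempt to $ADDR:$PORT should be refused.
if wait "$io_pid"; then
    echo "unexpected: I/O survived the target being killed" >&2
    exit 1
fi
echo "I/O failed after the target was killed, as expected"

The -9 is the point of the exercise: a graceful shutdown would let the target tear the queues down cleanly, whereas SIGKILL leaves nothing to answer the host, which is why the abrupt transport errors and the refused reconnects below follow.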
00:26:02.107 [2024-05-15 00:41:28.224851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.225024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.225050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.225221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.225386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.225426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.225725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.225987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.226013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.226176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.226351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.226376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.226635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.226796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.226822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.227032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.227223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.227249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.227443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.227642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.227673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 
00:26:02.107 [2024-05-15 00:41:28.227842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.228046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.228072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.228255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.228466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.228491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.228711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.228943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.228969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.229155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.229311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.229350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.229550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.229724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.229749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.229961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.230127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.230152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.230410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.230603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.230628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 
00:26:02.107 [2024-05-15 00:41:28.230820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.231014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.231041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.231199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.231357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.231397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.231656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.231823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.231847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.232049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.232242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.232268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.232440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.232608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.232633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.232815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.233044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.233070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.233255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.233461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.233486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 
00:26:02.107 [2024-05-15 00:41:28.233717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.233944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.233970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.234137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.234321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.234361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.234587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.234772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.234797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.234993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.235166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.235192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.235435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.235589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.235614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.235771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.235957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.235982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.236162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.236337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.236362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 
00:26:02.107 [2024-05-15 00:41:28.236512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.236705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.107 [2024-05-15 00:41:28.236746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.107 qpair failed and we were unable to recover it. 00:26:02.107 [2024-05-15 00:41:28.237009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.237179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.237204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.237390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.237585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.237610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.237795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.237991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.238017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.238186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.238375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.238400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.238565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.238752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.238777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.238943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.239135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.239162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 
00:26:02.108 [2024-05-15 00:41:28.239355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.239539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.239564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.239783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.239947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.239973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.240138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.240357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.240383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.240578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.240764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.240789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.240976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.241132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.241158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.241323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.241513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.241538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.241791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.241986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.242012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 
00:26:02.108 [2024-05-15 00:41:28.242162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.242322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.242347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.242533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.242827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.242852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.243025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.243235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.243275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.243447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.243729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.243755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.243957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.244155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.244180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.244437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.244628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.244653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.244810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.244967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.244993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 
00:26:02.108 [2024-05-15 00:41:28.245216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.245429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.245454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.245642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.245802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.245842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.246049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.246214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.246241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.246459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.246650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.246676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.246842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.247012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.247039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.247211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.247410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.247435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.247613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.247808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.247834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 
00:26:02.108 [2024-05-15 00:41:28.248022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.248243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.248269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.248463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.248743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.248769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.108 qpair failed and we were unable to recover it. 00:26:02.108 [2024-05-15 00:41:28.248946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.249132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.108 [2024-05-15 00:41:28.249158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.249353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.249513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.249553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.249758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.249951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.249977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.250193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.250356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.250382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.250574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.250815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.250840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 
00:26:02.109 [2024-05-15 00:41:28.251028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.251195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.251235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.251396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.251640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.251665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.251852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.252131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.252157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.252360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.252609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.252636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.252830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.252987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.253013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.253201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.253391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.253416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.253605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.253791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.253832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 
00:26:02.109 [2024-05-15 00:41:28.254030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.254212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.254238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.254399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.254604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.254629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.254832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.255050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.255076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.255262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.255428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.255454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.255687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.255869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.255895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.256081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.256299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.256325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.256485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.256663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.256687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 
00:26:02.109 [2024-05-15 00:41:28.256922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.257090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.257117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.257287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.257456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.257481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.257693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.257906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.257936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.258171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.258383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.258408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.258589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.258773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.258798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.258959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.259140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.259166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.259322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.259505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.259530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 
00:26:02.109 [2024-05-15 00:41:28.259714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.259882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.259908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.260119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.260307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.260332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.260498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.260692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.260717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.109 qpair failed and we were unable to recover it. 00:26:02.109 [2024-05-15 00:41:28.260887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.261109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.109 [2024-05-15 00:41:28.261136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.261336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.261558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.261584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.261776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.261952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.261978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.262165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.262333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.262360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 
00:26:02.110 [2024-05-15 00:41:28.262553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.262769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.262795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.262994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.263153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.263178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.263378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.263573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.263614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.263789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.263981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.264008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.264200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.264394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.264420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.264582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.264769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.264794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.264956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.265154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.265181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 
00:26:02.110 [2024-05-15 00:41:28.265402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.265557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.265583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.110 qpair failed and we were unable to recover it. 00:26:02.110 [2024-05-15 00:41:28.265761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.110 [2024-05-15 00:41:28.265951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.265977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.266191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.266379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.266405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.266572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.266735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.266760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.266974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.267143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.267168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.267339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.267495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.267520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.267721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.267946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.267972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 
00:26:02.381 [2024-05-15 00:41:28.268138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.268306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.268330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.268543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.268769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.268794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.268982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.269146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.269172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.269354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.269539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.269565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.269751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.270011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.270036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.270217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.270439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.270464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.270716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.270871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.270913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 
00:26:02.381 [2024-05-15 00:41:28.271089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.271310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.271335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.271501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.271662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.271701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.381 qpair failed and we were unable to recover it. 00:26:02.381 [2024-05-15 00:41:28.271896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.272072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.381 [2024-05-15 00:41:28.272098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.272322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.272547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.272572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.272731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.272920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.272955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.273186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.273378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.273405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.273650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.273878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.273903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 
00:26:02.382 [2024-05-15 00:41:28.274104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.274286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.274312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.274472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.274663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.274689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.274873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.275033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.275059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.275229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.275417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.275442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.275631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.275825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.275852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.276044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.276232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.276258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.276470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.276673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.276698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 
00:26:02.382 [2024-05-15 00:41:28.276868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.277059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.277087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.277292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.277487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.277513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.277699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.277876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.277900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.278084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.278271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.278312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.278467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.278678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.278703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.278860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.279045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.279072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.279237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.279451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.279476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 
00:26:02.382 [2024-05-15 00:41:28.279643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.279953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.279979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.280186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.280447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.280471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.280676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.280865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.280891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.281090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.281335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.281375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.281603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.281764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.281794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.281984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.282174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.282200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.282364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.282578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.282603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 
00:26:02.382 [2024-05-15 00:41:28.282967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.283175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.283204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.283417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.283641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.283667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.283900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.284078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.284120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.382 qpair failed and we were unable to recover it. 00:26:02.382 [2024-05-15 00:41:28.284316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.382 [2024-05-15 00:41:28.284476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.284517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.284724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.284943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.284969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.285129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.285351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.285376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.285574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.285736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.285777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 
00:26:02.383 [2024-05-15 00:41:28.285975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.286198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.286243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.286457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.286684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.286708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.286906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.287097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.287123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.287280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.287439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.287480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.287696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.287921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.287963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.288183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.288406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.288431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.288683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.288888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.288912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 
00:26:02.383 [2024-05-15 00:41:28.289102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.289302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.289326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.289516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.289723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.289748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.289921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.290111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.290137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.290304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.290459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.290488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.290652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.290814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.290839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.291028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.291205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.291230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.291411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.291585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.291611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 
00:26:02.383 [2024-05-15 00:41:28.291807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.292040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.292065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.292291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.292473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.292497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.292707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.292916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.292946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.293172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.293354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.293380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.293536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.293727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.293753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.294023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.294245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.294269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.294471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.294741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.294766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 
00:26:02.383 [2024-05-15 00:41:28.294972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.295187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.295214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.295426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.295658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.295684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.295912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.296150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.296175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.296377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.296540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.383 [2024-05-15 00:41:28.296565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.383 qpair failed and we were unable to recover it. 00:26:02.383 [2024-05-15 00:41:28.296757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.296967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.296992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.297194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.297385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.297425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.297654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.297819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.297845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 
00:26:02.384 [2024-05-15 00:41:28.298041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.298211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.298237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.298440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.298600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.298641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.298867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.299049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.299076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.299262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.299441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.299466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.299670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.299877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.299901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.300111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.300323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.300349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.300527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.300686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.300710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 
00:26:02.384 [2024-05-15 00:41:28.300937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.301106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.301131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.301318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.301496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.301521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.301690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.301885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.301911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.302106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.302297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.302323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.302482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.302732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.302772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.302943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.303132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.303157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.303357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.303547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.303586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 
00:26:02.384 [2024-05-15 00:41:28.303787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.303959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.303985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.304198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.304349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.304375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.304554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.304711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.304738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.304935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.305128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.305155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.305431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.305625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.305652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.305868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.306032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.306058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.306223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.306429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.306454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 
00:26:02.384 [2024-05-15 00:41:28.306678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.306905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.306937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.307109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.307313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.307338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.307529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.307743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.307769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.307926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.308125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.308150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.308337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.308524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.384 [2024-05-15 00:41:28.308549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.384 qpair failed and we were unable to recover it. 00:26:02.384 [2024-05-15 00:41:28.308720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.308911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.308943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.309109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.309322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.309348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 
00:26:02.385 [2024-05-15 00:41:28.309579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.309772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.309797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.309997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.310324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.310364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.310621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.310813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.310840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.311098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.311273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.311312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.311521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.311757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.311781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.311984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.312176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.312201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.312380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.312574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.312599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 
00:26:02.385 [2024-05-15 00:41:28.312851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.313016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.313057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.313246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.313487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.313528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.313737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.313899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.313924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.314129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.314320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.314346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.314510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.314718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.314742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.314951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.315188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.315213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.315400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.315560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.315585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 
00:26:02.385 [2024-05-15 00:41:28.315773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.315953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.315979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.316214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.316402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.316428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.316611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.316799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.316825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.316984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.317148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.317174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.317357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.317549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.317576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.317742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.317928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.317959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.318231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.318428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.318452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 
00:26:02.385 [2024-05-15 00:41:28.318650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.318818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.318858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.319068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.319233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.319273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.385 qpair failed and we were unable to recover it. 00:26:02.385 [2024-05-15 00:41:28.319473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.319642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.385 [2024-05-15 00:41:28.319666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.319821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.320031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.320057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.320286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.320485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.320511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.320716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.320988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.321015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.321205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.321399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.321426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 
00:26:02.386 [2024-05-15 00:41:28.321700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.321860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.321887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.322100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.322340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.322364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.322567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.322755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.322781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.322970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.323154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.323180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.323384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.323595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.323620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.323851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.324015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.324041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.324210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.324426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.324451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 
00:26:02.386 [2024-05-15 00:41:28.324656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.324878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.324906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.325120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.325327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.325352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.325585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.325778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.325803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.325997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.326167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.326195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.326490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.326666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.326693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.326878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.327045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.327073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.327261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.327449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.327488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 
00:26:02.386 [2024-05-15 00:41:28.327662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.327831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.327857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.328062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.328254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.328279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.328473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.328666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.328692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.328859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.329053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.329080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.329288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.329470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.329511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.329683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.329871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.329896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.386 [2024-05-15 00:41:28.330067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.330263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.330288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 
00:26:02.386 [2024-05-15 00:41:28.330481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.330631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.386 [2024-05-15 00:41:28.330656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.386 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.330844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.331003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.331033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.331229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.331407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.331432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.331647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.331846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.331871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.332082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.332288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.332313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.332521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.332712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.332737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.332894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.333074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.333101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 
00:26:02.387 [2024-05-15 00:41:28.333293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.333553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.333579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.333794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.333997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.334024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.334182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.334398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.334424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.334636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.334826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.334853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.335023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.335184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.335210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.335421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.335609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.335634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.335818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.336007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.336033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 
00:26:02.387 [2024-05-15 00:41:28.336196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.336430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.336456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.336647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.336834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.336861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.337050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.337276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.337302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.337479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.337663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.337689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.337913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.338087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.338114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.338307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.338583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.338610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.338800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.339059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.339087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 
00:26:02.387 [2024-05-15 00:41:28.339282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.339469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.339496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.339690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.339907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.339938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.340160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.340367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.340392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.340625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.340810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.340835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.341035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.341201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.341241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.341462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.341696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.341722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.341898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.342103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.342129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 
00:26:02.387 [2024-05-15 00:41:28.342325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.342482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.342509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.387 qpair failed and we were unable to recover it. 00:26:02.387 [2024-05-15 00:41:28.342680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.342883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.387 [2024-05-15 00:41:28.342910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.343119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.343318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.343344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.343529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.343691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.343731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.343945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.344108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.344134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.344298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.344474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.344499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.344700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.344867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.344906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 
00:26:02.388 [2024-05-15 00:41:28.345125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.345335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.345360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.345543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.345725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.345755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.345918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.346085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.346125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.346345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.346496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.346522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.346713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.346918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.346969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.347183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.347383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.347408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.347696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.347893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.347918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 
00:26:02.388 [2024-05-15 00:41:28.348145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.348350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.348374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.348583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.348777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.348802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.349037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.349264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.349289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.349468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.349689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.349714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.349911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.350100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.350131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.350318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.350485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.350510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.350698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.350905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.350936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 
00:26:02.388 [2024-05-15 00:41:28.351121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.351324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.351350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.351540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.351750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.351774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.352015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.352173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.352200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.352433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.352627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.352653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.352828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.353029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.353056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.353270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.353460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.353486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.353650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.353836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.353863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 
00:26:02.388 [2024-05-15 00:41:28.354087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.354285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.354315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.354495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.354692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.354718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.388 [2024-05-15 00:41:28.354939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.355098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.388 [2024-05-15 00:41:28.355123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.388 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.355344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.355527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.355552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.355730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.355895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.355922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.356130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.356320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.356346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.356531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.356719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.356745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 
00:26:02.389 [2024-05-15 00:41:28.356966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.357157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.357183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.357369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.357532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.357557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.357745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.357983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.358009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.358192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.358373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.358402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.358561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.358777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.358802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.358996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.359166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.359191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.359389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.359546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.359572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 
00:26:02.389 [2024-05-15 00:41:28.359757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.359919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.359949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.360113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.360310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.360336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.360526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.360716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.360741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.360928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.361094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.361121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.361341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.361533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.361559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.361746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.361905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.361951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.362153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.362345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.362371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 
00:26:02.389 [2024-05-15 00:41:28.362567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.362733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.362759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.362946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.363106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.363133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.363324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.363485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.363512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.363723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.363908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.363944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.364108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.364291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.364330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.364554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.364712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.364739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.364902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.365147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.365173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 
00:26:02.389 [2024-05-15 00:41:28.365369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.365525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.365551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.365714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.365872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.365899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.366076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.366260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.366285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.389 qpair failed and we were unable to recover it. 00:26:02.389 [2024-05-15 00:41:28.366452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.366606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.389 [2024-05-15 00:41:28.366632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.366799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.367001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.367029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.367218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.367410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.367436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.367594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.367777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.367817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 
00:26:02.390 [2024-05-15 00:41:28.368017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.368172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.368197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.368390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.368606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.368631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.368825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.369008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.369034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.369245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.369402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.369427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.369576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.369759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.369784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.370000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.370215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.370240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.370405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.370575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.370601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 
00:26:02.390 [2024-05-15 00:41:28.370822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.370988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.371015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.371208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.371405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.371430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.371622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.371808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.371835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.372097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.372298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.372322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.372525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.372689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.372730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.372939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.373128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.373154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.373418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.373578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.373604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 
00:26:02.390 [2024-05-15 00:41:28.373802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.374014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.374039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.374205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.374374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.374399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.374591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.374746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.374787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.374975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.375182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.375209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.375396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.375597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.375623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.375812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.375974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.376015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 00:26:02.390 [2024-05-15 00:41:28.376217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.376409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.390 [2024-05-15 00:41:28.376435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.390 qpair failed and we were unable to recover it. 
00:26:02.390 [2024-05-15 00:41:28.376651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.376811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.376836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.377029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.377188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.377214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.377406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.377576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.377602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.377793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.377956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.377982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.378141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.378370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.378395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.378579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.378839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.378863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.379050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.379278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.379302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 
00:26:02.391 [2024-05-15 00:41:28.379492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.379704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.379728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.379923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.380126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.380153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.380373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.380575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.380600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.380783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.381013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.381040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.381257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.381438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.381464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.381681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.381882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.381909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.382108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.382295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.382321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 
00:26:02.391 [2024-05-15 00:41:28.382511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.382668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.382708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.382900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.383121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.383148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.383315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.383525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.383550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.383764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.383923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.383964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.384155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.384383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.384408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.384599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.384821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.384847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.385009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.385192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.385218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 
00:26:02.391 [2024-05-15 00:41:28.385434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.385584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.385610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.385780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.386025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.386051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.386266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.386431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.386458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.386648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.386832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.386857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.387128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.387339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.387363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.387562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.387753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.387780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.391 qpair failed and we were unable to recover it. 00:26:02.391 [2024-05-15 00:41:28.388021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.388180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.391 [2024-05-15 00:41:28.388206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 
00:26:02.392 [2024-05-15 00:41:28.388419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.388569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.388595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.388756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.388952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.388980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.389135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.389331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.389357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.389558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.389823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.389848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.390048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.390274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.390299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.390491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.390671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.390696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.390903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.391137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.391163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 
00:26:02.392 [2024-05-15 00:41:28.391339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.391525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.391551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.391765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.391967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.392008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.392176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.392392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.392418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.392601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.392787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.392826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.393105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.393272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.393298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.393502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.393709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.393734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.393966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.394125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.394151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 
00:26:02.392 [2024-05-15 00:41:28.394347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.394512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.394537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.394742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.394945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.394972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.395167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.395409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.395435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.395655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.395851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.395877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.396081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.396298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.396323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.396556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.396758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.396785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.396982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.397195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.397221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 
00:26:02.392 [2024-05-15 00:41:28.397392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.397557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.397582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.397755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.397946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.397973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.398138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.398357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.398382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.398594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.398754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.392 [2024-05-15 00:41:28.398780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.392 qpair failed and we were unable to recover it. 00:26:02.392 [2024-05-15 00:41:28.398940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.399127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.399153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.399324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.399500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.399526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.399701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.399887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.399928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 
00:26:02.393 [2024-05-15 00:41:28.400136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.400337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.400363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.400524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.400772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.400797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.400961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.401143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.401168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.401391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.401565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.401590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.401830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.402023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.402049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.402232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.402415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.402440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.402630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.402787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.402814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 
00:26:02.393 [2024-05-15 00:41:28.403028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.403221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.403250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.403413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.403623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.403648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.403826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.404038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.404064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.404255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.404506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.404531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.404710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.404926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.404959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.405122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.405320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.405347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.405522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.405756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.405781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 
00:26:02.393 [2024-05-15 00:41:28.405983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.406175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.406202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.406394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.406593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.406618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.406814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.407032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.407058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.393 qpair failed and we were unable to recover it. 00:26:02.393 [2024-05-15 00:41:28.407248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.407431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.393 [2024-05-15 00:41:28.407455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.407668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.407850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.407875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.408089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.408259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.408286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.408620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.408918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.408952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 
00:26:02.394 [2024-05-15 00:41:28.409171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.409373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.409420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.409645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.409859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.409884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.410114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.410324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.410367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.410539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.410729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.410753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.410951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.411143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.411185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.411448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.411739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.411782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.411952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.412241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.412267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 
00:26:02.394 [2024-05-15 00:41:28.412481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.412718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.412743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.412923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.413125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.413154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.413372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.413573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.413598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.413761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.413949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.413975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.414168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.414352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.414377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.414536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.414770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.414795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.415003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.415198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.415240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 
00:26:02.394 [2024-05-15 00:41:28.415387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.415630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.415655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.415811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.416029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.416055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.416237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.416430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.416469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.416713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.416938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.416964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.394 qpair failed and we were unable to recover it. 00:26:02.394 [2024-05-15 00:41:28.417181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.394 [2024-05-15 00:41:28.417351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.417382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.417568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.417735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.417761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.417939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.418140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.418166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 
00:26:02.395 [2024-05-15 00:41:28.418341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.418528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.418554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.418749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.419023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.419050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.419258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.419496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.419520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.419731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.419927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.419957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.420122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.420338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.420364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.420551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.420817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.420843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.421063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.421245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.421270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 
00:26:02.395 [2024-05-15 00:41:28.421449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.421630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.421659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.421895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.422072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.422100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.422293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.422483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.422509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.422701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.422889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.422914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.423121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.423347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.423373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.423564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.423946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.423972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.424184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.424399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.424441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 
00:26:02.395 [2024-05-15 00:41:28.424659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.424865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.424890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.425113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.425300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.425326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.425575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.425764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.425788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.425979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.426177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.426207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.395 qpair failed and we were unable to recover it. 00:26:02.395 [2024-05-15 00:41:28.426443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.426619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.395 [2024-05-15 00:41:28.426645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.426864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.427033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.427071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.427232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.427515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.427539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 
00:26:02.396 [2024-05-15 00:41:28.427770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.427939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.427966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.428128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.428298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.428323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.428487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.428704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.428729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.428941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.429116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.429141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.429344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.429533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.429560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.429737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.429910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.429940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.430154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.430342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.430367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 
00:26:02.396 [2024-05-15 00:41:28.430560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.430752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.430777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.430995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.431191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.431216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.431408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.431599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.431626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.431805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.432000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.432026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.432189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.432346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.432371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.432555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.432749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.432774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.432961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.433150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.433176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 
00:26:02.396 [2024-05-15 00:41:28.433357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.433511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.433536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.433688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.433874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.433899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.434066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.434254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.434279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.396 qpair failed and we were unable to recover it. 00:26:02.396 [2024-05-15 00:41:28.434449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.434633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.396 [2024-05-15 00:41:28.434659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.434825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.435017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.435043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.435202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.435427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.435452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.435644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.435832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.435857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 
00:26:02.397 [2024-05-15 00:41:28.436044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.436210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.436237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.436449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.436642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.436669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.436918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.437109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.437135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.437307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.437506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.437531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.437696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.437880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.437905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.438090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.438317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.438343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.438562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.438728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.438753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 
00:26:02.397 [2024-05-15 00:41:28.438942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.439132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.439158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.439323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.439513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.439538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.439727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.439953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.439979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.440189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.440353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.440378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.440626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.440835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.440860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.441051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.441219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.441244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.441409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.441596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.441621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 
00:26:02.397 [2024-05-15 00:41:28.441806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.441996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.442022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.442179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.442367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.442393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.442584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.442774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.442799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.442960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.443118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.443143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.443337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.443491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.443517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.443709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.443900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.443927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.444093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.444272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.444297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 
00:26:02.397 [2024-05-15 00:41:28.444481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.444637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.444680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.444899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.445063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.445089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.445274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.445461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.445487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.445715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.445881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.445905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.397 [2024-05-15 00:41:28.446128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.446374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.397 [2024-05-15 00:41:28.446399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.397 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.446591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.446810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.446835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.447089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.447293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.447318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 
00:26:02.398 [2024-05-15 00:41:28.447548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.447736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.447763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.447968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.448221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.448261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.448485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.448647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.448674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.448887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.449058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.449084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.449356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.449515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.449541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.449749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.449939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.449965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.450155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.450361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.450386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 
00:26:02.398 [2024-05-15 00:41:28.450571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.450785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.450810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.451021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.451185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.451225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.451427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.451589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.451614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.451796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.451989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.452015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.452209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.452368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.452393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.452606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.452793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.452818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.453013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.453165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.453190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 
00:26:02.398 [2024-05-15 00:41:28.453376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.453592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.453617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.453831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.454015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.454041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.454260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.454504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.454529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.454727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.454905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.454951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.455145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.455352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.455377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.455593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.455809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.455834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.455998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.456191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.456216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 
00:26:02.398 [2024-05-15 00:41:28.456398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.456588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.456613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.456779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.456976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.457002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.457159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.457356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.457395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.457558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.457784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.457809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.458002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.458193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.458218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.458403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.458564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.458603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.398 [2024-05-15 00:41:28.458806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.458996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.459022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 
00:26:02.398 [2024-05-15 00:41:28.459235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.459415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.398 [2024-05-15 00:41:28.459444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.398 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.459727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.459974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.460002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.460199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.460427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.460453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.460704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.460861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.460886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.461092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.461284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.461309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.461474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.461629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.461654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.461816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.461984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.462011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 
00:26:02.399 [2024-05-15 00:41:28.462241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.462430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.462455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.462654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.462857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.462881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.463077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.463265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.463290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.463496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.463694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.463735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.463907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.464106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.464132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.464313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.464508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.464533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.464719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.464995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.465020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 
00:26:02.399 [2024-05-15 00:41:28.465250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.465439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.465464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.465670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.465864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.465889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.466092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.466281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.466307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.466584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.466782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.466808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.467012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.467208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.467232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.467447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.467704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.467749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.467957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.468153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.468178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 
00:26:02.399 [2024-05-15 00:41:28.468367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.468559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.468584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.468751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.468918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.468950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.469139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.469342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.469368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.469552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.469712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.469752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.469927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.470135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.470160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.470388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.470574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.470600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.470789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.471027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.471053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 
00:26:02.399 [2024-05-15 00:41:28.471240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.471532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.471556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.471751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.471945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.471971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.472189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.472363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.472389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.472548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.472740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.472765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.472935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.473129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.473155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.473365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.473526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.473552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 00:26:02.399 [2024-05-15 00:41:28.473748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.473940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.473966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.399 qpair failed and we were unable to recover it. 
00:26:02.399 [2024-05-15 00:41:28.474159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-05-15 00:41:28.474351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.474376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.474538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.474721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.474747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.474940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.475132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.475156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.475346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.475509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.475535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.475734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.475956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.475982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.476156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.476333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.476359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.476576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.476762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.476787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 
00:26:02.400 [2024-05-15 00:41:28.476943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.477096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.477121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.477295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.477531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.477556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.477746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.477935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.477961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.478159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.478332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.478357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.478561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.478762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.478787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.478986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.479202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.479227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.479442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.479599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.479624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 
00:26:02.400 [2024-05-15 00:41:28.479814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.480019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.480045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.480231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.480432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.480461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.480675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.480864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.480888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.481093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.481275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.481302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.481470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.481707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.481732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.481935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.482123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.482148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.482433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.482657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.482682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 
00:26:02.400 [2024-05-15 00:41:28.482848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.483036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.483062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.483252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.483417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.483443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.483609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.483835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.483860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.484072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.484238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.484263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.484459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.484619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.484648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.484836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.485074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.485100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.400 [2024-05-15 00:41:28.485309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.485488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.485513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 
00:26:02.400 [2024-05-15 00:41:28.485700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.485884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.400 [2024-05-15 00:41:28.485909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.400 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.486081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.486296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.486321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.486484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.486680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.486706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.486866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.487066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.487091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.487283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.487467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.487492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.487645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.487813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.487840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.488007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.488179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.488204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 
00:26:02.401 [2024-05-15 00:41:28.488391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.488603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.488632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.488814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.488972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.488998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.489162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.489345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.489370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.489524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.489719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.489744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.489937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.490093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.490119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.490291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.490459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.490485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.490674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.490836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.490861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 
00:26:02.401 [2024-05-15 00:41:28.491062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.491260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.491286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.491448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.491613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.491640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.491830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.492010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.492037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.492200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.492370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.492396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.492593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.492784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.492809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.493020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.493217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.493242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.493425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.493637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.493661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 
00:26:02.401 [2024-05-15 00:41:28.493825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.494013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.494038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.494252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.494453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.494477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.494692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.494848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.494875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.495042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.495249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.495274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.495475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.495640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.495664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.495849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.496060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.496086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.496274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.496490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.496514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 
00:26:02.401 [2024-05-15 00:41:28.496684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.496873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.496898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.497075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.497260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.497285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.497471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.497629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.497654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.497832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.498029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.498054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.498218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.498402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.498426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.498585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.498746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.498785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.401 qpair failed and we were unable to recover it. 00:26:02.401 [2024-05-15 00:41:28.498981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.401 [2024-05-15 00:41:28.499144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.499169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 
00:26:02.402 [2024-05-15 00:41:28.499373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.499535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.499560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.499757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.499921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.499954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.500120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.500284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.500310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.500506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.500687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.500712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.500872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.501040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.501065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.501252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.501414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.501455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.501681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.501843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.501868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 
00:26:02.402 [2024-05-15 00:41:28.502097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.502285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.502310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.502503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.502667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.502693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.502855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.503048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.503074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.503313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.503467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.503492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.503679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.503853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.503878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.504084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.504274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.504298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.504585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.504742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.504769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 
00:26:02.402 [2024-05-15 00:41:28.504993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.505148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.505173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.505403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.505589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.505614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.505804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.505967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.505992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.506187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.506380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.506406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.506591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.506751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.506791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.506990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.507271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.507295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.507507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.507665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.507690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 
00:26:02.402 [2024-05-15 00:41:28.507907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.508103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.508128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.508289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.508482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.508507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.508730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.508920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.508953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.509185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.509372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.509397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.509586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.509852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.509876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.510072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.510235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.510260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.510449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.510633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.510658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 
00:26:02.402 [2024-05-15 00:41:28.510876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.511088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.511113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.511331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.511525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.511550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.511732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.511892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.511917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.512131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.512291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.512316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.512502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.512664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.512689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.512919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.513091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.513116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.513304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.513494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.513518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 
00:26:02.402 [2024-05-15 00:41:28.513672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.513865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.513892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.402 qpair failed and we were unable to recover it. 00:26:02.402 [2024-05-15 00:41:28.514122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.402 [2024-05-15 00:41:28.514311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.514336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.514515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.514665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.514689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.514897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.515113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.515139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.515355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.515520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.515544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.515731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.515919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.515951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.516137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.516360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.516386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 
00:26:02.403 [2024-05-15 00:41:28.516546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.516741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.516766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.516923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.517115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.517140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.517353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.517520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.517545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.517728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.517916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.517947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.518134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.518346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.518371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.518535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.518694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.518719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.518916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.519121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.519147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 
00:26:02.403 [2024-05-15 00:41:28.519332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.519520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.519545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.519733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.519924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.519957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.520146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.520337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.520361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.520517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.520682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.520709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.520960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.521180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.521205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.521424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.521604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.521629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.521822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.521993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.522019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 
00:26:02.403 [2024-05-15 00:41:28.522208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.522388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.522413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.522572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.522757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.522795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.522985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.523145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.523171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.523391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.523595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.523620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.523830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.523993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.524019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.524183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.524365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.524389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.524555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.524740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.524764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 
00:26:02.403 [2024-05-15 00:41:28.524951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.525147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.525172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.525339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.525541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.525567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.525768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.525990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.526016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.526217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.526400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.526425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.526619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.526776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.526800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.526992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.527161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.527188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.527379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.527546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.527571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 
00:26:02.403 [2024-05-15 00:41:28.527761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.527978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.528003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.528186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.528383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.528407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.403 [2024-05-15 00:41:28.528565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.528756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.403 [2024-05-15 00:41:28.528783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.403 qpair failed and we were unable to recover it. 00:26:02.404 [2024-05-15 00:41:28.528969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.529125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.529151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 00:26:02.404 [2024-05-15 00:41:28.529343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.529533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.529557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 00:26:02.404 [2024-05-15 00:41:28.529772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.529947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.529972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 00:26:02.404 [2024-05-15 00:41:28.530162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.530376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.530400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 
00:26:02.404 [2024-05-15 00:41:28.530589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.530764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.530789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 00:26:02.404 [2024-05-15 00:41:28.530955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.531151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.531176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 00:26:02.404 [2024-05-15 00:41:28.531346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.531509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.531535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 00:26:02.404 [2024-05-15 00:41:28.531697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.531886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.404 [2024-05-15 00:41:28.531911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.404 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.532103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.532260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.532285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.532442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.532641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.532665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.532856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.533055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.533080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 
00:26:02.684 [2024-05-15 00:41:28.533247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.533407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.533432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.533597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.533788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.533815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.534017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.534179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.534203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.534387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.534598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.534623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.534785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.534957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.534983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.535202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.535399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.535425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.535581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.535785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.535809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 
00:26:02.684 [2024-05-15 00:41:28.536007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.536167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.536193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.536415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.536580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.536607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.536809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.537019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.537046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.537236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.537410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.537434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.537620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.537805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.537830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.538020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.538208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.538232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.538395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.538662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.538686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 
00:26:02.684 [2024-05-15 00:41:28.538958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.539128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.539154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.539347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.539534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.539558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.539711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.539900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.539925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.540194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.540361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.540385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.540551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.540766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.540790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.540976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.541168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.541196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.684 [2024-05-15 00:41:28.541382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.541541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.541566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 
00:26:02.684 [2024-05-15 00:41:28.541769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.541955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.684 [2024-05-15 00:41:28.541981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.684 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.542173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.542364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.542389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.542604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.542797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.542821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.542990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.543186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.543211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.543403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.543590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.543615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.543776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.543952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.543978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.544166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.544444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.544469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 
00:26:02.685 [2024-05-15 00:41:28.544655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.544846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.544870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.545056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.545241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.545269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.545430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.545597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.545622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.545871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.546052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.546077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.546240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.546428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.546452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.546607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.546760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.546784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.547051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.547242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.547267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 
00:26:02.685 [2024-05-15 00:41:28.547460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.547644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.547669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.547871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.548030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.548056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.548272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.548464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.548491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.548645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.548840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.548866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.549067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.549263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.549293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.549558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.549718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.549742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.549935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.550120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.550146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 
00:26:02.685 [2024-05-15 00:41:28.550307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.550464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.550504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.550658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.550912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.550942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.551171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.551327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.551352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.551626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.551786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.551811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.552013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.552174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.552200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.552393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.552583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.552608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.552776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.552952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.552993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 
00:26:02.685 [2024-05-15 00:41:28.553258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.553451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.553480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.685 qpair failed and we were unable to recover it. 00:26:02.685 [2024-05-15 00:41:28.553665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.685 [2024-05-15 00:41:28.553849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.553875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.554039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.554201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.554226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.554437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.554597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.554623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.554780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.554972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.554997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.555164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.555354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.555380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.555613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.555820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.555845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 
00:26:02.686 [2024-05-15 00:41:28.556045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.556208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.556249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.556449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.556629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.556654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.556851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.557041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.557067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.557258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.557522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.557548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.557748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.557941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.557967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.558163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.558349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.558373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.558552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.558737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.558762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 
00:26:02.686 [2024-05-15 00:41:28.558954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.559142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.559167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.559328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.559482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.559507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.559697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.559857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.559882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.560084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.560278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.560301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.560535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.560695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.560720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.560948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.561139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.561164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.561349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.561560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.561584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 
00:26:02.686 [2024-05-15 00:41:28.561779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.561938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.561963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.562152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.562338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.562363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.562553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.562738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.562762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.562974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.563163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.563188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.563402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.563593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.563619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.563901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.564145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.564186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.564381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.564570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.564594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 
00:26:02.686 [2024-05-15 00:41:28.564749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.564962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.564988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.565155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.565342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.565367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.686 [2024-05-15 00:41:28.565525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.565798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.686 [2024-05-15 00:41:28.565823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.686 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.566046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.566228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.566253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.566513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.566735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.566760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.566928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.567099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.567140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.567443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.567731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.567756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 
00:26:02.687 [2024-05-15 00:41:28.567914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.568150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.568176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.568337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.568504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.568528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.568713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.568889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.568914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.569084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.569250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.569277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.569438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.569601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.569625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.569793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.570000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.570027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.570194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.570407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.570432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 
00:26:02.687 [2024-05-15 00:41:28.570617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.570892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.570937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.571123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.571303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.571328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.571542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.571784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.571809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.571963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.572145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.572170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.572357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.572566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.572591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.572805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.572995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.573021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.573184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.573371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.573397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 
00:26:02.687 [2024-05-15 00:41:28.573585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.573740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.573765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.573955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.574139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.574164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.574327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.574489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.574529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.574731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.574955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.574980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.575144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.575345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.575369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.575562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.575750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.575775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.575953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.576138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.576162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 
00:26:02.687 [2024-05-15 00:41:28.576332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.576525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.576549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.576704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.576893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.576918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.577098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.577314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.577339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.577523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.577675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.577700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.687 qpair failed and we were unable to recover it. 00:26:02.687 [2024-05-15 00:41:28.577884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.687 [2024-05-15 00:41:28.578067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.578093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.578291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.578481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.578508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.578677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.578926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.578957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 
00:26:02.688 [2024-05-15 00:41:28.579176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.579416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.579441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.579708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.579971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.579999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.580217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.580428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.580456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.580662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.580895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.580923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.581158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.581392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.581419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.581618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.581807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.581832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.582038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.582222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.582250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 
00:26:02.688 [2024-05-15 00:41:28.582468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.582645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.582675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.582890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.583129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.583158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.583460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.583767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.583796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.583987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.584243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.584270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.584506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.584685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.584714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.584914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.585132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.585160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.585393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.585599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.585627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 
00:26:02.688 [2024-05-15 00:41:28.585835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.586073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.586101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.586289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.586463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.586491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.586694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.586925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.586960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.587161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.587339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.587368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.587607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.587814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.587842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.588050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.588241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.588266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 00:26:02.688 [2024-05-15 00:41:28.588481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.588689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.588716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.688 qpair failed and we were unable to recover it. 
00:26:02.688 [2024-05-15 00:41:28.588922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.589087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.688 [2024-05-15 00:41:28.589129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.589334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.589541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.589569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.589743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.589953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.589982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.590188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.590440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.590463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.590670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.590908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.590943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.591138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.591521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.591548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.591785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.591985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.592013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 
00:26:02.689 [2024-05-15 00:41:28.592188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.592403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.592428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.592619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.592807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.592832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.593016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.593189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.593217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.593426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.593631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.593659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.593860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.594089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.594118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.594370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.594629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.594681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.594926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.595145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.595173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 
00:26:02.689 [2024-05-15 00:41:28.595375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.595605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.595630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.595852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.596064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.596092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.596296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.596509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.596537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.596749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.596940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.596968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.597154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.597386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.597415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.597587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.597768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.597795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.598094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.598456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.598516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 
00:26:02.689 [2024-05-15 00:41:28.598754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.598926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.598960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.599192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.599377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.599405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.599590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.599823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.599849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.600064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.600227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.600268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.600463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.600688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.600715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.600895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.601082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.601110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.601284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.601496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.601524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 
00:26:02.689 [2024-05-15 00:41:28.601758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.601957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.601985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.602201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.602441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.602468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.689 qpair failed and we were unable to recover it. 00:26:02.689 [2024-05-15 00:41:28.602696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.689 [2024-05-15 00:41:28.602900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.602940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.603161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.603308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.603333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.603491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.603697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.603725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.603960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.604124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.604149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.604363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.604569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.604596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 
00:26:02.690 [2024-05-15 00:41:28.604773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.605006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.605034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.605241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.605443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.605468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.605656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.605869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.605898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.606140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.606374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.606402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.606615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.606787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.606815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.607059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.607230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.607255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.607445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.607682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.607737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 
00:26:02.690 [2024-05-15 00:41:28.607944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.608147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.608175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.608384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.608595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.608623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.608824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.609035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.609063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.609272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.609453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.609480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.609681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.609886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.609914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.610099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.610278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.610311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.610489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.610718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.610746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 
00:26:02.690 [2024-05-15 00:41:28.610966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.611219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.611271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.611563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.611796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.611821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.612035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.612209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.612237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.612479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.612713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.612740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.612961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.613171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.613199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.613402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.613634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.613659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.613846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.614047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.614076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 
00:26:02.690 [2024-05-15 00:41:28.614308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.614513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.614541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.614742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.614949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.614982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.615188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.615364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.615394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.690 [2024-05-15 00:41:28.615626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.615804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.690 [2024-05-15 00:41:28.615831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.690 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.616071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.616229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.616255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.616445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.616704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.616732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.616943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.617141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.617166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 
00:26:02.691 [2024-05-15 00:41:28.617359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.617529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.617553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.617770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.617955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.617990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.618203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.618411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.618440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.618617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.618857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.618884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.619067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.619305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.619337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.619521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.619696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.619723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.619962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.620257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.620307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 
00:26:02.691 [2024-05-15 00:41:28.620521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.620736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.620764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.620939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.621179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.621204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.621477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.621942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.622003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.622209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.622384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.622412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.622649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.622847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.622888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.623117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.623267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.623292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.623531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.623985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.624015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 
00:26:02.691 [2024-05-15 00:41:28.624254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.624423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.624454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.624665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.624878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.624906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.625131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.625366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.625394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.625793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.626053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.626093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.626313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.626524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.626551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.626783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.626999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.627027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.627229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.627583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.627612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 
00:26:02.691 [2024-05-15 00:41:28.627822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.628043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.628071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.628313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.628527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.628555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.628788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.629023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.629051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.629234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.629465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.629490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.629725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.629899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.629927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.691 qpair failed and we were unable to recover it. 00:26:02.691 [2024-05-15 00:41:28.630118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.630270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.691 [2024-05-15 00:41:28.630312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.630510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.630708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.630736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 
00:26:02.692 [2024-05-15 00:41:28.630942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.631152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.631177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.631369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.631577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.631607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.631847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.632084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.632112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.632312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.632493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.632521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.632729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.632981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.633007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.633241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.633579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.633633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.633860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.634059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.634087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 
00:26:02.692 [2024-05-15 00:41:28.634302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.634499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.634526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.634731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.634944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.634974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.635228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.635436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.635463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.635639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.635871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.635899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.636117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.636313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.636337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.636546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.636753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.636778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.636956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.637181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.637206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 
00:26:02.692 [2024-05-15 00:41:28.637419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.637618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.637647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.637864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.638082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.638111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.638357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.638559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.638587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.638800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.639010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.639039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.639247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.639462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.639490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.639722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.639939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.639965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.640181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.640371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.640401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 
00:26:02.692 [2024-05-15 00:41:28.640632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.640869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.640898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.641134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.641350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.641377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.641618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.641822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.641849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.642061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.642264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.642292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.642501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.642682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.642710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.642924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.643154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.643179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.692 qpair failed and we were unable to recover it. 00:26:02.692 [2024-05-15 00:41:28.643408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.643610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.692 [2024-05-15 00:41:28.643639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 
00:26:02.693 [2024-05-15 00:41:28.643849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.644029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.644059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.644276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.644491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.644518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.644702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.644899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.644927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.645115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.645321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.645349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.645561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.645737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.645767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.645980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.646188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.646218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.646459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.646645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.646673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 
00:26:02.693 [2024-05-15 00:41:28.646880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.647097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.647123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.647339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.647546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.647571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.647814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.647986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.648016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.648236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.648492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.648520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.648752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.648924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.648961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.649172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.649407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.649435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.649722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.649965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.649993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 
00:26:02.693 [2024-05-15 00:41:28.650188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.650423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.650451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.650642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.650839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.650867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.651044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.651254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.651282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.651478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.651681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.651707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.651891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.652147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.652176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.652387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.652562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.652590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.652829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.653017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.653044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 
00:26:02.693 [2024-05-15 00:41:28.653201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.653409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.653437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.653633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.653903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.653938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.654149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.654392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.654420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.654628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.654845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.654870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.655053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.655265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.693 [2024-05-15 00:41:28.655295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.693 qpair failed and we were unable to recover it. 00:26:02.693 [2024-05-15 00:41:28.655481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.655716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.655744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.655916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.656182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.656210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 
00:26:02.694 [2024-05-15 00:41:28.656448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.656655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.656683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.656897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.657087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.657116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.657352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.657517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.657542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.657755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.657979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.658004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.658214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.658416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.658445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.658619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.658815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.658844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.659053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.659295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.659323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 
00:26:02.694 [2024-05-15 00:41:28.659533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.659748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.659773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.659939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.660118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.660146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.660349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.660547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.660574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.660786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.660951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.660976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.661184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.661426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.661458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.661648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.661878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.661904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.662077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.662253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.662281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 
00:26:02.694 [2024-05-15 00:41:28.662463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.662669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.662698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.662936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.663150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.663176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.663389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.663642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.663693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.663943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.664127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.664152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.664307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.664466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.664508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.664868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.665103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.665129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.665321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.665527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.665555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 
00:26:02.694 [2024-05-15 00:41:28.665739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.665984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.666009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.666232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.666469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.666493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.666677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.666842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.666870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.667089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.667273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.667302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.667537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.667708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.667736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.667915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.668136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.668161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 00:26:02.694 [2024-05-15 00:41:28.668377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.668740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.694 [2024-05-15 00:41:28.668790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.694 qpair failed and we were unable to recover it. 
00:26:02.695 [2024-05-15 00:41:28.669034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.669207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.669233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.669412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.669654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.669679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.669869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.670086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.670117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.670359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.670544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.670573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.670740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.670927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.670957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.671118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.671325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.671353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.671540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.671740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.671767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 
00:26:02.695 [2024-05-15 00:41:28.672005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.672208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.672236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.672467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.672723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.672775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.672963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.673129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.673154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.673342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.673556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.673584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.673823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.674006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.674033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.674202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.674414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.674442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.674672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.674887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.674915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 
00:26:02.695 [2024-05-15 00:41:28.675131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.675335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.675405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.675640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.675848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.675876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.676091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.676311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.676336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.676570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.676901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.676960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.677172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.677353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.677380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.677567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.677746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.677774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.677951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.678186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.678214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 
00:26:02.695 [2024-05-15 00:41:28.678519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.678787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.678816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.678999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.679186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.679243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.679466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.679669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.679697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.679910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.680123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.680152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.680340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.680582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.680607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.680786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.681007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.681036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.681244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.681457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.681482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 
00:26:02.695 [2024-05-15 00:41:28.681714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.681916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.695 [2024-05-15 00:41:28.681955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.695 qpair failed and we were unable to recover it. 00:26:02.695 [2024-05-15 00:41:28.682169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.682536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.682590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.682824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.683010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.683036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.683246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.683445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.683472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.683692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.683896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.683924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.684140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.684309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.684336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.684544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.684869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.684927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 
00:26:02.696 [2024-05-15 00:41:28.685176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.685379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.685409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.685653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.685866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.685894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.686078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.686296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.686321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.686510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.686854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.686882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.687074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.687272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.687330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.687559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.687908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.687978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.688160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.688361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.688389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 
00:26:02.696 [2024-05-15 00:41:28.688601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.688817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.688842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.689045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.689235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.689261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.689458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.689643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.689673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.689858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.690044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.690070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.690259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.690503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.690555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.690762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.690960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.690989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.691168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.691370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.691398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 
00:26:02.696 [2024-05-15 00:41:28.691572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.691769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.691796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.691983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.692162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.692190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.692398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.692602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.692629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.692842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.693041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.693070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.693310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.693631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.693699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.693901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.694114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.694147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.694354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.694528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.694555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 
00:26:02.696 [2024-05-15 00:41:28.694809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.694997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.695023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.695185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.695387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.695415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.695631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.695809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.696 [2024-05-15 00:41:28.695837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.696 qpair failed and we were unable to recover it. 00:26:02.696 [2024-05-15 00:41:28.696020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.696230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.696257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.696464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.696693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.696718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.696954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.697184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.697243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.697430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.697686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.697727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 
00:26:02.697 [2024-05-15 00:41:28.697945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.698161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.698189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.698418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.698758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.698822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.699076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.699273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.699301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.699537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.699749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.699777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.699986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.700170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.700195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.700454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.700661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.700689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.700874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.701091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.701118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 
00:26:02.697 [2024-05-15 00:41:28.701275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.701507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.701535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.701751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.701940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.701966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.702118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.702285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.702327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.702567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.702833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.702885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.703128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.703291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.703316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.703493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.703694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.703718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.703899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.704095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.704120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 
00:26:02.697 [2024-05-15 00:41:28.704286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.704546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.704598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.704780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.704992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.705020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.705230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.705426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.705454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.705660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.705865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.705893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.706112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.706264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.706305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.706490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.706710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.706735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.697 [2024-05-15 00:41:28.706959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.707173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.707203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 
00:26:02.697 [2024-05-15 00:41:28.707445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.707713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.697 [2024-05-15 00:41:28.707764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.697 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.707943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.708127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.708154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.708358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.708527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.708553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.708757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.708967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.708996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.709210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.709419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.709447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.709651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.709832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.709859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.710043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.710206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.710231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 
00:26:02.698 [2024-05-15 00:41:28.710411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.710624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.710674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.710879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.711097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.711123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.711286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.711499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.711528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.711723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.711928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.711964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.712178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.712427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.712457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.712642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.712874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.712902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.713097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.713426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.713474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 
00:26:02.698 [2024-05-15 00:41:28.713708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.713921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.713958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.714175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.714362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.714387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.714576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.714799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.714824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.715014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.715225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.715253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.715472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.715688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.715716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.715920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.716113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.716141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.716341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.716549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.716576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 
00:26:02.698 [2024-05-15 00:41:28.716783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.716993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.717022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.717267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.717432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.717457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.717663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.717902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.717927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.718117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.718297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.718325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.718687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.718950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.718979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.719172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.719386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.719414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.719598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.719802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.719830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 
00:26:02.698 [2024-05-15 00:41:28.720032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.720235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.720264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.720540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.720781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.720806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.698 [2024-05-15 00:41:28.720971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.721172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.698 [2024-05-15 00:41:28.721200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.698 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.721369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.721585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.721610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.721851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.722194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.722254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.722571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.722770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.722797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.723003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.723193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.723224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 
00:26:02.699 [2024-05-15 00:41:28.723467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.723678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.723707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.723892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.724080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.724108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.724281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.724483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.724512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.724719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.724885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.724910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.725074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.725285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.725310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.725565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.725816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.725844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.726052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.726300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.726326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 
00:26:02.699 [2024-05-15 00:41:28.726498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.726691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.726718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.726904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.727162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.727191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.727402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.727613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.727642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.727822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.728053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.728082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.728291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.728503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.728531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.728738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.728956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.728983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.729192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.729436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.729463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 
00:26:02.699 [2024-05-15 00:41:28.729689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.729893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.729921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.730168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.730409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.730438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.730672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.730890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.730918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.731157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.731348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.731376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.731683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.731938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.731967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.732185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.732392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.732458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.732671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.732877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.732905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 
00:26:02.699 [2024-05-15 00:41:28.733107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.733281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.733309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.733513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.733687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.733715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.733923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.734168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.734195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.734377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.734550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.734578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.699 qpair failed and we were unable to recover it. 00:26:02.699 [2024-05-15 00:41:28.734796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.699 [2024-05-15 00:41:28.734981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.735011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.735193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.735370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.735398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.735582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.735793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.735828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 
00:26:02.700 [2024-05-15 00:41:28.736015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.736234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.736262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.736466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.736646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.736674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.736905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.737097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.737125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.737340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.737536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.737561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.737726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.737956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.737981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.738145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.738301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.738326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.738536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.738725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.738753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 
00:26:02.700 [2024-05-15 00:41:28.738988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.739207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.739236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.739486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.739665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.739693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.739898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.740092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.740124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.740333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.740662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.740719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.740954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.741164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.741197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.741437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.741639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.741664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.741851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.742039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.742068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 
00:26:02.700 [2024-05-15 00:41:28.742302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.742642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.742707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.742916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.743140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.743173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.743382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.743694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.743753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.743956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.744200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.744227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.744457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.744664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.744691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.744871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.745068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.745096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.745358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.745738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.745788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 
00:26:02.700 [2024-05-15 00:41:28.746002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.746184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.746212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.746421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.746637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.746697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.746887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.747084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.747111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.747320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.747504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.747530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.747711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.747918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.747961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.748177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.748413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.748438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.700 qpair failed and we were unable to recover it. 00:26:02.700 [2024-05-15 00:41:28.748631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.700 [2024-05-15 00:41:28.748802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.748830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 
00:26:02.701 [2024-05-15 00:41:28.749066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.749310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.749338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.749570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.749750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.749779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.749994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.750205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.750231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.750385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.750552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.750577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.750794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.750985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.751014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.751247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.751498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.751551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.751795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.751985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.752011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 
00:26:02.701 [2024-05-15 00:41:28.752179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.752362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.752387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.752634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.752865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.752893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.753116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.753392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.753445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.753626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.753869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.753897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.754115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.754380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.754432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.754670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.754880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.754914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.755134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.755311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.755338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 
00:26:02.701 [2024-05-15 00:41:28.755555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.755718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.755743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.755927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.756147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.756175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.756378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.756573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.756600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.756811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.756983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.757013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.757252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.757610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.757670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.757857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.758043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.758069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.758285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.758474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.758502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 
00:26:02.701 [2024-05-15 00:41:28.758714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.758945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.758974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.759154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.759360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.759390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.759604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.759791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.759819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.759997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.760182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.760210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.701 qpair failed and we were unable to recover it. 00:26:02.701 [2024-05-15 00:41:28.760393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.701 [2024-05-15 00:41:28.760571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.760599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.760812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.761017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.761044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.761231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.761445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.761470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 
00:26:02.702 [2024-05-15 00:41:28.761696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.761871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.761899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.762079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.762308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.762339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.762609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.762834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.762862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.763075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.763248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.763276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.763513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.763765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.763821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.764060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.764289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.764318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.764519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.764710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.764736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 
00:26:02.702 [2024-05-15 00:41:28.764942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.765127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.765156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.765369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.765573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.765602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.765823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.766012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.766039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.766293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.766591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.766620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.766863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.767043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.767076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.767252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.767533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.767583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.767794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.768004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.768035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 
00:26:02.702 [2024-05-15 00:41:28.768276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.768486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.768514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.768740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.768956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.768985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.769224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.769441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.769470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.769655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.769837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.769865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.770104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.770311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.770340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.770545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.770734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.770759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.770940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.771118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.771149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 
00:26:02.702 [2024-05-15 00:41:28.771361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.771577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.771602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.771793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.771962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.771988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.772177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.772379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.772407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.772610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.772794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.772822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.773043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.773229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.773287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.773526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.773728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.702 [2024-05-15 00:41:28.773779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.702 qpair failed and we were unable to recover it. 00:26:02.702 [2024-05-15 00:41:28.773990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.774175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.774203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 
00:26:02.703 [2024-05-15 00:41:28.774372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.774564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.774590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.774776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.775036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.775069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.775352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.775615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.775667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.775873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.776083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.776113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.776325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.776578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.776627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.776814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.777019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.777048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.777257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.777478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.777530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 
00:26:02.703 [2024-05-15 00:41:28.777716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.777938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.777973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.778189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.778342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.778367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.778579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.778756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.778784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.779029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.779208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.779237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.779411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.779568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.779609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.779819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.779984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.780028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.780251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.780409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.780451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 
00:26:02.703 [2024-05-15 00:41:28.780632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.780853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.780878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.781069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.781267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.781292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.781516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.781732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.781760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.781976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.782188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.782217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.782477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.782780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.782828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.783026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.783220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.783246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.783470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.783680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.783708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 
00:26:02.703 [2024-05-15 00:41:28.783924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.784116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.784145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.784385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.784559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.784586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.784797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.785009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.785041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.785224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.785452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.785499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.785702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.785885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.785915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.786123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.786355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.786383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.786562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.786779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.786808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 
00:26:02.703 [2024-05-15 00:41:28.786995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.787173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.703 [2024-05-15 00:41:28.787202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.703 qpair failed and we were unable to recover it. 00:26:02.703 [2024-05-15 00:41:28.787376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.787574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.787621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.787831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.788040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.788069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.788281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.788456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.788484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.788667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.788871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.788899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.789116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.789373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.789419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.789662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.789872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.789902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 
00:26:02.704 [2024-05-15 00:41:28.790131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.790295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.790320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.790499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.790668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.790696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.790907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.791105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.791134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.791371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.791589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.791635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.791845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.792019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.792048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.792258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.792458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.792486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.792726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.792940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.792969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 
00:26:02.704 [2024-05-15 00:41:28.793149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.793396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.793424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.793664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.793817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.793842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.794052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.794236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.794263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.794475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.794791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.794852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.795032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.795321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.795349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.795549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.795732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.795778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.795985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.796164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.796192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 
00:26:02.704 [2024-05-15 00:41:28.796392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.796681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.796706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.796912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.797088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.797118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.797334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.797538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.797563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.797744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.797976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.798004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.798207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.798448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.798473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.798717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.798900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.798924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.799102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.799279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.799306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 
00:26:02.704 [2024-05-15 00:41:28.799483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.799690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.799715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.799879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.800042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.800069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.800276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.800474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.704 [2024-05-15 00:41:28.800509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.704 qpair failed and we were unable to recover it. 00:26:02.704 [2024-05-15 00:41:28.800691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.800883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.800908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.801066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.801318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.801371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.801622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.801826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.801851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.802056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.802268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.802336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 
00:26:02.705 [2024-05-15 00:41:28.802570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.802822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.802866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.803081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.803260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.803288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.803488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.803695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.803720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.803906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.804125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.804153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.804328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.804499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.804524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.804712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.804883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.804911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.805121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.805307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.805336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 
00:26:02.705 [2024-05-15 00:41:28.805569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.805794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.805839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.806059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.806249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.806274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.806458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.806777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.806841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.807022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.807230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.807258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.807494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.807741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.807766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.807921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.808140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.808168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.808406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.808590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.808615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 
00:26:02.705 [2024-05-15 00:41:28.808796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.808979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.809007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.809249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.809433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.809464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.809683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.809890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.809917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.810136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.810464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.810521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.810764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.810966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.810996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.811289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.811546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.811597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.811806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.812018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.812047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 
00:26:02.705 [2024-05-15 00:41:28.812266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.812452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.812477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.812648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.812847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.812872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.813118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.813308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.813336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.813521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.813707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.813733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.705 qpair failed and we were unable to recover it. 00:26:02.705 [2024-05-15 00:41:28.813916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.705 [2024-05-15 00:41:28.814124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.814149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.814359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.814575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.814600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.814811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.815045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.815074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 
00:26:02.706 [2024-05-15 00:41:28.815256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.815439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.815499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.815707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.815941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.815970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.816179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.816473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.816527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.816967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.817210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.817236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.817393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.817583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.817608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.817846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.818005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.818032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.818214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.818394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.818419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 
00:26:02.706 [2024-05-15 00:41:28.818720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.818955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.818984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.819199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.819387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.819421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.819629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.819873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.819901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.820095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.820329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.820358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.820603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.820824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.820849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.821065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.821353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.821402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.821587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.821798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.821827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 
00:26:02.706 [2024-05-15 00:41:28.822018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.822204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.822229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.822464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.822679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.822706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.822949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.823166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.823195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.823370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.823635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.823687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.823873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.824078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.824112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.824317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.824523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.824551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 00:26:02.706 [2024-05-15 00:41:28.824764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.824962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.706 [2024-05-15 00:41:28.824988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:02.706 qpair failed and we were unable to recover it. 
00:26:03.000 [2024-05-15 00:41:28.825170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.825367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.825403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.825583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.825755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.825781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.825946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.826114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.826157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.826339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.826546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.826577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.826791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.826977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.827013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.827225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.827531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.827586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.827819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.828009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.828039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 
00:26:03.000 [2024-05-15 00:41:28.828279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.828602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.828655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.828878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.829181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.829212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.829399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.829731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.829778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.830022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.830198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.830227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.830468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.830763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.830790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.830983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.831203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.831232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.831440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.831660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.831686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 
00:26:03.000 [2024-05-15 00:41:28.831890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.832089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.832121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.832302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.832505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.832560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.832773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.832960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.832990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.833202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.833385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.833417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.833610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.833818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.833847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.834062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.834248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.834276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.834451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.834668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.834693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 
00:26:03.000 [2024-05-15 00:41:28.834869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.835112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.835144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.835358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.835569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.835596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.835793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.835957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.000 [2024-05-15 00:41:28.835988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.000 qpair failed and we were unable to recover it. 00:26:03.000 [2024-05-15 00:41:28.836173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.836443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.836499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.836710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.836936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.836965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.837193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.837400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.837430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.837651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.837833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.837862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 
00:26:03.001 [2024-05-15 00:41:28.838053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.838221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.838267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.838487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.838655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.838685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.838888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.839105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.839137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.839353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.839541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.839570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.839762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.839973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.840002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.840183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.840361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.840389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.840599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.840798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.840826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 
00:26:03.001 [2024-05-15 00:41:28.841067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.841370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.841431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.841652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.841839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.841867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.842035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.842258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.842284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.842444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.842680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.842713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.842938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.843118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.843144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.843354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.843598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.843650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.843855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.844062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.844091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 
00:26:03.001 [2024-05-15 00:41:28.844440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.844688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.844715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.844935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.845122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.845150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.845369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.845575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.845605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.845845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.846040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.846066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.846234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.846387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.846412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.846599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.846787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.846816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.847034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.847198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.847225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 
00:26:03.001 [2024-05-15 00:41:28.847445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.847793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.847839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.848049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.848277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.848333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.848514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.848667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.848692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.001 qpair failed and we were unable to recover it. 00:26:03.001 [2024-05-15 00:41:28.848906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.001 [2024-05-15 00:41:28.849109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.849137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.849374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.849641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.849671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.849892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.850112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.850141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.850331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.850564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.850591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 
00:26:03.002 [2024-05-15 00:41:28.850840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.851026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.851052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.851225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.851435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.851461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.851675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.851934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.851964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.852177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.852343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.852368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.852602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.852802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.852830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.853069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.853425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.853487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.853722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.853886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.853911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 
00:26:03.002 [2024-05-15 00:41:28.854137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.854329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.854357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.854564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.854973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.855002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.855240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.855481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.855506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.855686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.855899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.855926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.856129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.856336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.856364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.856580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.856791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.856817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.856987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.857160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.857185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 
00:26:03.002 [2024-05-15 00:41:28.857399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.857603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.857628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.857790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.858102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.858164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.858397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.858618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.858668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.858847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.859058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.859086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.859353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.859562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.859590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.859799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.860041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.860070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.860265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.860476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.860503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 
00:26:03.002 [2024-05-15 00:41:28.860719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.860926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.860961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.861170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.861424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.861449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.861637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.861855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.861880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.002 qpair failed and we were unable to recover it. 00:26:03.002 [2024-05-15 00:41:28.862116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.002 [2024-05-15 00:41:28.862322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.862350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.862529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.862873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.862939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.863126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.863484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.863543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.863754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.863944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.863970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 
00:26:03.003 [2024-05-15 00:41:28.864185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.864378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.864406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.864610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.864830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.864881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.865133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.865339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.865365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.865529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.865750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.865802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.865989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.866340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.866392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.866615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.866821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.866854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.867099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.867296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.867325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 
00:26:03.003 [2024-05-15 00:41:28.867508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.867690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.867732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.867949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.868110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.868135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.868290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.868449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.868474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.868744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.869004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.869033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.869246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.869484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.869512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.869724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.869893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.869920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.870125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.870354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.870409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 
00:26:03.003 [2024-05-15 00:41:28.870785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.871047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.871076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.871294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.871518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.871546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.871756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.871974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.872000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.872205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.872414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.872442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.872688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.872897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.872925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.873123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.873419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.873474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.873681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.873874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.873902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 
00:26:03.003 [2024-05-15 00:41:28.874160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.874332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.874357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.874570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.874835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.874863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.003 qpair failed and we were unable to recover it. 00:26:03.003 [2024-05-15 00:41:28.875056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.003 [2024-05-15 00:41:28.875243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.875268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.875480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.875718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.875743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.875963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.876185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.876214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.876452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.876643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.876668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.876856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.877069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.877098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-05-15 00:41:28.877287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.877603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.877663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.877882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.878087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.878116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.878464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.878867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.878921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.879117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.879271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.879315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.879538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.879698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.879741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.879948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.880129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.880156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.880434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.880755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.880837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-05-15 00:41:28.881048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.881235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.881263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.881471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.881745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.881795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.882037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.882389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.882442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.882813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.883049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.883078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.883294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.883507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.883563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.883778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.883942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.883985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.884161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.884461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.884526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-05-15 00:41:28.884731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.884949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.884974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.885187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.885424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.885452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.885667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.885900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.885928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.004 qpair failed and we were unable to recover it. 00:26:03.004 [2024-05-15 00:41:28.886132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.886397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.004 [2024-05-15 00:41:28.886448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.886821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.887076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.887107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.887302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.887605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.887663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.887878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.888089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.888118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-05-15 00:41:28.888304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.888477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.888507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.888679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.888896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.888921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.889091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.889282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.889310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.889515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.889751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.889777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.890025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.890359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.890416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.890798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.891027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.891055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.891269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.891640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.891696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-05-15 00:41:28.891938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.892099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.892130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.892341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.892540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.892568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.892750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.892963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.892992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.893203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.893515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.893574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.893808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.894019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.894047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.894260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.894570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.894625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.894844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.895062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.895088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-05-15 00:41:28.895269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.895492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.895540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.895756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.895963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.895992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.896228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.896414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.896442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.896740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.897071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.897097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.897305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.897555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.897580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.897814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.897998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.898026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.898231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.898487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.898512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-05-15 00:41:28.898760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.899019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.899047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.899254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.899411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.899436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.899602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.899812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.005 [2024-05-15 00:41:28.899839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.005 qpair failed and we were unable to recover it. 00:26:03.005 [2024-05-15 00:41:28.900012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.900310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.900369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.900603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.900815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.900840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.901037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.901226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.901251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.901465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.901676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.901703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 
00:26:03.006 [2024-05-15 00:41:28.901940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.902173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.902200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.902413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.902621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.902649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.902828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.902997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.903023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.903211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.903426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.903451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.903671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.903874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.903902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.904121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.904380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.904405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.904592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.904798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.904849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 
00:26:03.006 [2024-05-15 00:41:28.905059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.905247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.905273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.905506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.905907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.905973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.906185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.906392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.906420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.906664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.906870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.906896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.907063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.907264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.907316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.907561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.907875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.907936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.908167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.908495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.908546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 
00:26:03.006 [2024-05-15 00:41:28.908759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.908985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.909014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.909187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.909397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.909425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.909659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.909890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.909919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.910105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.910323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.910373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.910605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.910887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.910914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.911099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.911307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.911334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.911505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.911736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.911766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 
00:26:03.006 [2024-05-15 00:41:28.912002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.912214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.912241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.912464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.912742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.912770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.913010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.913194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.913222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.913422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.913652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.913680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.913891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.914092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.006 [2024-05-15 00:41:28.914121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.006 qpair failed and we were unable to recover it. 00:26:03.006 [2024-05-15 00:41:28.914324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.914527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.914555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.914734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.914986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.915012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 
00:26:03.007 [2024-05-15 00:41:28.915225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.915505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.915533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.915743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.915953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.915982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.916171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.916330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.916373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.916596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.916803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.916830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.917037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.917309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.917335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.917603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.917832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.917860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.918047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.918257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.918286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 
00:26:03.007 [2024-05-15 00:41:28.918492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.918707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.918735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.918965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.919153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.919182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.919396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.919606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.919634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.919871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.920081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.920110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.920315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.920571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.920596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.920753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.920962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.920991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.921287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.921569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.921620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 
00:26:03.007 [2024-05-15 00:41:28.921832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.921996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.922022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.922232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.922550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.922607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.922833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.923038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.923067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.923236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.923443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.923473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.923680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.923892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.923920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.924126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.924329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.924357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.924528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.924734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.924762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 
00:26:03.007 [2024-05-15 00:41:28.924974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.925180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.925208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.925448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.925637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.925662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.925903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.926123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.926152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.926359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.926590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.926645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.926853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.927094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.927120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.927311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.927470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.927495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 00:26:03.007 [2024-05-15 00:41:28.927706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.927969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.007 [2024-05-15 00:41:28.927995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.007 qpair failed and we were unable to recover it. 
00:26:03.008 [2024-05-15 00:41:28.928172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.928354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.928382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.928594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.928806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.928831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.929001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.929193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.929219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.929428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.929799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.929854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.930065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.930284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.930309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.930555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.930767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.930794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.930993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.931285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.931334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 
00:26:03.008 [2024-05-15 00:41:28.931548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.931901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.931960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.932148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.932380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.932408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.932610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.932828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.932853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.933024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.933230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.933258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.933469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.933672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.933701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.933908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.934118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.934146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.934424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.934723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.934775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 
00:26:03.008 [2024-05-15 00:41:28.934959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.935149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.935175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.935358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.935525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.935554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.935772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.936007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.936035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.936272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.936560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.936610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.936854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.937035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.937064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.937301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.937512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.937539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.937749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.937903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.937944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 
00:26:03.008 [2024-05-15 00:41:28.938130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.938323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.938348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.938565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.939005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.939033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.939266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.939580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.939638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.939855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.940046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.940072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.940252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.940437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.940464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.940684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.940869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.940894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.941148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.941370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.941395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 
00:26:03.008 [2024-05-15 00:41:28.941598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.941808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.941833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.008 [2024-05-15 00:41:28.942069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.942296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.008 [2024-05-15 00:41:28.942325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.008 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.942565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.942750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.942778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.942973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.943150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.943178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.943414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.943621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.943649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.943862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.944066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.944095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.944270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.944596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.944643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 
00:26:03.009 [2024-05-15 00:41:28.944858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.945074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.945100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.945297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.945479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.945504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.945824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.946101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.946129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.946331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.946504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.946531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.946764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.947000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.947039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.947246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.947455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.947484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.947666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.947873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.947901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 
00:26:03.009 [2024-05-15 00:41:28.948084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.948321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.948349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.948568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.948836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.948888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.949107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.949291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.949319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.949527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.949744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.949769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.949953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.950173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.950202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.950451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.950668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.950696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.950904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.951118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.951146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 
00:26:03.009 [2024-05-15 00:41:28.951502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.951818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.951846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.952031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.952236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.952264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.952478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.952688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.952713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.952912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.953078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.953106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.953402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.953669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.953697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.009 [2024-05-15 00:41:28.953908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.954080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.009 [2024-05-15 00:41:28.954107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.009 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.954314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.954656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.954707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 
00:26:03.010 [2024-05-15 00:41:28.954911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.955097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.955130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.955321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.955506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.955531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.955720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.955909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.955946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.956160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.956453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.956514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.956720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.956965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.956991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.957211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.957492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.957520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.957737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.957951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.957980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 
00:26:03.010 [2024-05-15 00:41:28.958168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.958367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.958395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.958605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.958863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.958890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.959054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.959243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.959268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.959485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.959667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.959694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.959942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.960160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.960190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.960393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.960732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.960795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.961029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.961270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.961298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 
00:26:03.010 [2024-05-15 00:41:28.961486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.961673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.961698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.961858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.962076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.962102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.962285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.962505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.962533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.962827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.963083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.963142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.963351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.963567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.963595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.963811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.963977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.964019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.964230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.964435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.964463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 
00:26:03.010 [2024-05-15 00:41:28.964708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.964942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.964971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.965210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.965387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.965417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.965636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.965875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.965903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.966108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.966290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.966318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.966575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.966833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.966858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.967044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.967326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.967382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.010 [2024-05-15 00:41:28.967604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.967785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.967812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 
00:26:03.010 [2024-05-15 00:41:28.968018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.968357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.010 [2024-05-15 00:41:28.968422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.010 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.968711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.968913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.968949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.969165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.969328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.969353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.969539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.969728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.969753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.969965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.970137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.970165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.970376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.970623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.970648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.970804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.970989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.971015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 
00:26:03.011 [2024-05-15 00:41:28.971203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.971398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.971425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.971629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.971831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.971859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.972073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.972273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.972301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.972503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.972748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.972794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.972982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.973191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.973219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.973423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.973602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.973634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.973880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.974047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.974073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 
00:26:03.011 [2024-05-15 00:41:28.974267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.974455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.974482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.974664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.974875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.974903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.975125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.975401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.975429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.975634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.975849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.975875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.976083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.976241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.976266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.976442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.976635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.976668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.976874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.977087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.977117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 
00:26:03.011 [2024-05-15 00:41:28.977295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.977495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.977525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.977719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.977893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.977921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.978139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.978425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.978486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.978722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.978904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.978943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.979169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.979388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.979419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.979640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.979845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.979873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.980114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.980349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.980378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 
00:26:03.011 [2024-05-15 00:41:28.980585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.980974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.981004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.981195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.981578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.981628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.011 [2024-05-15 00:41:28.981840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.982027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.011 [2024-05-15 00:41:28.982053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.011 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.982211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.982398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.982467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.982651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.982822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.982864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.983049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.983262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.983291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.983488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.983650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.983691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 
00:26:03.012 [2024-05-15 00:41:28.983900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.984086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.984116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.984356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.984632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.984681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.984922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.985134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.985162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.985350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.985523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.985548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.985767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.985952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.985982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.986162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.986374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.986400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.986567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.986771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.986799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 
00:26:03.012 [2024-05-15 00:41:28.987014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.987232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.987261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.987475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.987632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.987658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.987859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.988050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.988078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.988314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.988643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.988672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.988881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.989054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.989080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.989261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.989524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.989580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.989795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.990010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.990036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 
00:26:03.012 [2024-05-15 00:41:28.990224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.990430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.990455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.990681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.990843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.990869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.991076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.991253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.991284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.991452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.991724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.991776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.992042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.992227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.992255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.992470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.992639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.992665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.992824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.992994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.993023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 
00:26:03.012 [2024-05-15 00:41:28.993200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.993446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.993498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.993824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.994113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.994142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.994354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.994520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.994548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.994734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.994982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.995011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.012 [2024-05-15 00:41:28.995198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.995384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.012 [2024-05-15 00:41:28.995413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.012 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.995625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.995815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.995840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.996008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.996226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.996251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 
00:26:03.013 [2024-05-15 00:41:28.996470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.996650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.996679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.996865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.997106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.997135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.997322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.997612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.997671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.997850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.998046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.998078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.998262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.998461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.998489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.998670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.998878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.998908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.999097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.999280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.999310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 
00:26:03.013 [2024-05-15 00:41:28.999495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.999691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:28.999717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:28.999906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.000084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.000110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.000354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.000621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.000671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.000905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.001109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.001138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.001351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.001542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.001571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.001749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.001961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.001991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.002200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.002479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.002529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 
00:26:03.013 [2024-05-15 00:41:29.002811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.003085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.003112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.003280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.003470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.003495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.003658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.003883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.003912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.004134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.004321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.004350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.004631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.004877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.004905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.005119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.005367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.005395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.005575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.005782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.005813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 
00:26:03.013 [2024-05-15 00:41:29.006028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.006234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.006262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.006471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.006657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.006682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.013 qpair failed and we were unable to recover it. 00:26:03.013 [2024-05-15 00:41:29.006871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.007079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.013 [2024-05-15 00:41:29.007107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.007321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.007535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.007560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.007736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.007966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.007995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.008191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.008485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.008536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.008749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.008961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.008990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 
00:26:03.014 [2024-05-15 00:41:29.009239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.009478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.009530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.009748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.009917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.009966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.010149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.010436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.010489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.010700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.010892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.010918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.011113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.011291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.011346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.011557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.011855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.011903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.012149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.012384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.012434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 
00:26:03.014 [2024-05-15 00:41:29.012644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.012879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.012908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.013127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.013305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.013335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.013522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.013735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.013790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.014003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.014252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.014278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.014471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.014725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.014773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.014962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.015173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.015203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.015384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.015655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.015680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 
00:26:03.014 [2024-05-15 00:41:29.015873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.016072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.016104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.016319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.016547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.016601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.016789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.017010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.017036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.017219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.017433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.017459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.017651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.017887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.017912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.018076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.018262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.018288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.018504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.018713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.018742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 
00:26:03.014 [2024-05-15 00:41:29.018918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.019162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.019209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.019466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.019707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.019734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.019916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.020110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.020135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.020342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.020562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.020594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.014 qpair failed and we were unable to recover it. 00:26:03.014 [2024-05-15 00:41:29.020753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.020941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.014 [2024-05-15 00:41:29.020967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.021127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.021350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.021397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.021612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.021800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.021847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 
00:26:03.015 [2024-05-15 00:41:29.022091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.022267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.022295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.022461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.022710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.022737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.022904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.023118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.023146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.023384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.023586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.023615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.023852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.024045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.024071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.024233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.024421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.024447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.024681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.024881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.024909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 
00:26:03.015 [2024-05-15 00:41:29.025106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.025316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.025344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.025517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.025700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.025729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.025963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.026174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.026202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.026448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.026622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.026655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.026911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.027137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.027166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.027353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.027586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.027633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.027845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.028033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.028062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 
00:26:03.015 [2024-05-15 00:41:29.028273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.028453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.028482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.028656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.028808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.028852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.029089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.029292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.029321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.029523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.029757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.029806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.029992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.030187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.030212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.030376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.030581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.030610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.030856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.031100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.031131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 
00:26:03.015 [2024-05-15 00:41:29.031314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.031489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.031518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.031751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.031988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.032017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.032227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.032411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.032440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.032660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.032829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.032875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.033087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.033313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.033339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.033546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.033801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.033827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.015 qpair failed and we were unable to recover it. 00:26:03.015 [2024-05-15 00:41:29.034000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.034185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.015 [2024-05-15 00:41:29.034215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 
00:26:03.016 [2024-05-15 00:41:29.034428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.034629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.034659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.034890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.035077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.035106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.035322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.035484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.035527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.035718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.035885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.035942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.036160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.036397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.036424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.036614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.036817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.036845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.037039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.037276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.037305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 
00:26:03.016 [2024-05-15 00:41:29.037514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.037699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.037726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.037908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.038092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.038121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.038333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.038551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.038596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.038780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.039009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.039054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.039251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.039443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.039468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.039653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.039831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.039856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.040035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.040233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.040277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 
00:26:03.016 [2024-05-15 00:41:29.040519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.040701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.040746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.040947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.041136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.041161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.041353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.041587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.041615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.041827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.042032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.042061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.042273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.042459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.042484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.042679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.042889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.042922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.043117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.043336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.043362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 
00:26:03.016 [2024-05-15 00:41:29.043550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.043714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.043741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.043976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.044183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.044212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.044386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.044580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.044605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.044815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.044986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.045015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.045248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.045403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.045428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.045708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.045943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.045971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.046154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.046361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.046388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 
00:26:03.016 [2024-05-15 00:41:29.046593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.046774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.046803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.016 qpair failed and we were unable to recover it. 00:26:03.016 [2024-05-15 00:41:29.047035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.047285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.016 [2024-05-15 00:41:29.047336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.047608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.047823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.047851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.048059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.048246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.048274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.048478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.048723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.048748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.048915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.049109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.049137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.049320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.049549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.049595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 
00:26:03.017 [2024-05-15 00:41:29.049811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.050026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.050055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.050271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.050459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.050484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.050721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.050941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.050970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.051157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.051451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.051513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.051729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.051891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.051917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.052160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.052370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.052399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.052602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.052814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.052866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 
00:26:03.017 [2024-05-15 00:41:29.053083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.053298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.053326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.053541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.053724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.053750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.053939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.054149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.054177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.054390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.054595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.054623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.054854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.055032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.055062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.055269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.055452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.055478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.055678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.055884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.055912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 
00:26:03.017 [2024-05-15 00:41:29.056134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.056343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.056371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.056599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.056816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.056844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.057032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.057218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.057248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.057446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.057651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.057679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.057921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.058192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.058220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.058473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.058638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.058664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.058862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.059045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.059075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 
00:26:03.017 [2024-05-15 00:41:29.059250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.059451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.059480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.059688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.059861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.059891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.060081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.060395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.060458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.017 qpair failed and we were unable to recover it. 00:26:03.017 [2024-05-15 00:41:29.060671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.060883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.017 [2024-05-15 00:41:29.060911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.061144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.061377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.061429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.061675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.061861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.061889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.062145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.062349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.062378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 
00:26:03.018 [2024-05-15 00:41:29.062602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.062836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.062864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.063082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.063241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.063281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.063462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.063698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.063742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.063946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.064122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.064147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.064334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.064542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.064571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.064776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.064962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.064995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.065232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.065516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.065572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 
00:26:03.018 [2024-05-15 00:41:29.065749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.065950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.065983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.066171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.066446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.066496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.066699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.066910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.066951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.067181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.067376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.067403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.067661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.067883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.067911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.068117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.068357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.068411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.068584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.068793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.068821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 
00:26:03.018 [2024-05-15 00:41:29.069026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.069194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.069220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.069385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.069553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.069580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.069793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.070030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.070059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.070253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.070465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.070490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.070690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.070879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.070906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.071099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.071283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.071310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.071523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.071707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.071740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 
00:26:03.018 [2024-05-15 00:41:29.071961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.072181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.018 [2024-05-15 00:41:29.072207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.018 qpair failed and we were unable to recover it. 00:26:03.018 [2024-05-15 00:41:29.072393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.072552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.072579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.072792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.072988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.073020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.073226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.073387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.073413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.073606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.073788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.073817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.074008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.074245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.074293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.074542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.074789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.074815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 
00:26:03.019 [2024-05-15 00:41:29.074987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.075202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.075232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.075445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.075622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.075652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.075871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.076091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.076120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.076379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.076568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.076597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.076844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.077040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.077079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.077299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.077538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.077586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.077800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.077993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.078020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 
00:26:03.019 [2024-05-15 00:41:29.078191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.078354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.078382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.078558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.078771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.078804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.079016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.079237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.079263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.079445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.079664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.079690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.079941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.080195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.080226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.080413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.080620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.080648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.080863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.081069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.081098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 
00:26:03.019 [2024-05-15 00:41:29.081278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.081561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.081611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.081797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.082011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.082040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.082240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.082431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.082456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.082648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.082890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.082918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.083107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.083315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.083344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.083515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.083855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.083905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.084104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.084313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.084350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 
00:26:03.019 [2024-05-15 00:41:29.084595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.084746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.084771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.084967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.085167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.085193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.085364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.085570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.019 [2024-05-15 00:41:29.085615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.019 qpair failed and we were unable to recover it. 00:26:03.019 [2024-05-15 00:41:29.085835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.086064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.086093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.086336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.086557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.086601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.086800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.087006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.087035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.087273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.087475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.087503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 
00:26:03.020 [2024-05-15 00:41:29.087715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.087910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.087940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.088167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.088421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.088466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.088641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.088854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.088882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.089110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.089295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.089323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.089505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.089680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.089708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.089884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.090131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.090157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.090345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.090547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.090574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 
00:26:03.020 [2024-05-15 00:41:29.090757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.090999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.091025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.091248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.091402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.091428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.091617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.091773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.091798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.091961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.092155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.092181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.092391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.092664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.092716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.092934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.093156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.093183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.093423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.093671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.093729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 
00:26:03.020 [2024-05-15 00:41:29.093966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.094192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.094220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.094546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.095007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.095035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.095244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.095434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.095461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.095643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.095853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.095879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.096099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.096286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.096314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.096497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.096730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.096758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.096981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.097170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.097195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 
00:26:03.020 [2024-05-15 00:41:29.097414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.097599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.097627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.097831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.098044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.098072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.098335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.098499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.098524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.098737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.098964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.098996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.099197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.099411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.099439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.020 qpair failed and we were unable to recover it. 00:26:03.020 [2024-05-15 00:41:29.099643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.020 [2024-05-15 00:41:29.099821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.099851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.100166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.100614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.100679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 
00:26:03.021 [2024-05-15 00:41:29.100860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.101099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.101128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.101321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.101591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.101616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.101828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.102041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.102069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.102225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.102394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.102441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.102696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.102905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.102941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.103176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.103449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.103475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.103720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.103943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.103972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 
00:26:03.021 [2024-05-15 00:41:29.104207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.104420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.104448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.104698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.104882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.104910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.105165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.105405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.105432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.105607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.105843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.105871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.106097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.106302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.106330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.106566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.106831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.106883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.107107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.107288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.107317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 
00:26:03.021 [2024-05-15 00:41:29.107494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.107697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.107725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.107927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.108156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.108189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.108432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.108663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.108690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.108937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.109160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.109185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.109396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.109579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.109609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.109989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.110261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.110289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.110485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.110696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.110720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 
00:26:03.021 [2024-05-15 00:41:29.110942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.111160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.111188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.111367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.111587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.111614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.111773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.111987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.112013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.112227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.112439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.112467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.112674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.112879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.112907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.113120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.113323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.113375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 00:26:03.021 [2024-05-15 00:41:29.113623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.113803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.113828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.021 qpair failed and we were unable to recover it. 
00:26:03.021 [2024-05-15 00:41:29.114040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.021 [2024-05-15 00:41:29.114217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.114246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.114478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.114686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.114710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.114864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.115029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.115056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.115343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.115785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.115835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.116040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.116229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.116254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.116462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.116676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.116745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.116981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.117175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.117200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 
00:26:03.022 [2024-05-15 00:41:29.117406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.117625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.117669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.117867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.118109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.118138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.118323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.118514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.118539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.118779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.119042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.119070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.119303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.119503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.119531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.119745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.119945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.119974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.120189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.120484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.120539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 
00:26:03.022 [2024-05-15 00:41:29.120750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.120935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.120964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.121187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.121379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.121404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.121589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.121774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.121799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.122037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.122297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.122326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.122513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.122703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.122731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.122914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.123095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.123122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.123307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.123538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.123565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 
00:26:03.022 [2024-05-15 00:41:29.123771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.123981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.124012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.124216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.124383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.124411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.124644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.124860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.124888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.125110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.125320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.125349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.125596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.125791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.022 [2024-05-15 00:41:29.125816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.022 qpair failed and we were unable to recover it. 00:26:03.022 [2024-05-15 00:41:29.125975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.126189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.126217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.126390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.126624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.126671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 
00:26:03.023 [2024-05-15 00:41:29.126856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.127065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.127095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.127335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.127587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.127633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.127827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.128037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.128066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.128406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.128703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.128732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.128938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.129108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.129136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.129314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.129543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.129593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.129776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.130010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.130039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 
00:26:03.023 [2024-05-15 00:41:29.130240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.130492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.130551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.130768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.130983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.131012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.131240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.131470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.131498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.131736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.131918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.131954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.132169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.132382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.132410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.132602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.132811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.132839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.133018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.133222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.133250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 
00:26:03.023 [2024-05-15 00:41:29.133457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.133636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.133664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.133902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.134092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.134120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.134302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.134459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.134501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.134705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.134885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.134913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.135130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.135315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.135341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.135554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.135751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.135777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.135968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.136178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.136206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 
00:26:03.023 [2024-05-15 00:41:29.136452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.136641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.136666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.136829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.137018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.137044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.137325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.137525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.137554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.137756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.137950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.137976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.138161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.138340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.138364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.138560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.138773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.138800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 00:26:03.023 [2024-05-15 00:41:29.139005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.139229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.023 [2024-05-15 00:41:29.139255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.023 qpair failed and we were unable to recover it. 
00:26:03.023 [2024-05-15 00:41:29.139470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.139632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.139657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.139896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.140112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.140138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.140350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.140616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.140644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.140824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.141035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.141064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.141252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.141523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.141569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.141800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.141983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.142011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.142215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.142584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.142639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 
00:26:03.024 [2024-05-15 00:41:29.142842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.143039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.143067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.143262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.143494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.143539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.143712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.143916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.143951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.144203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.144468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.144497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.144835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.145091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.145120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.145328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.145518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.145564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.145753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.145973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.145999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 
00:26:03.024 [2024-05-15 00:41:29.146222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.146402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.146431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.146804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.147040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.147072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.147288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.147483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.147559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.147747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.147946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.147975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.148217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.148390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.024 [2024-05-15 00:41:29.148416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.024 qpair failed and we were unable to recover it. 00:26:03.024 [2024-05-15 00:41:29.148648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.295 [2024-05-15 00:41:29.148876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.295 [2024-05-15 00:41:29.148904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.295 qpair failed and we were unable to recover it. 00:26:03.295 [2024-05-15 00:41:29.149101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.149316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.149380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 
00:26:03.296 [2024-05-15 00:41:29.149589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.149832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.149858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.150051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.150290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.150315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.150499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.150717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.150750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.150939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.151108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.151137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.151382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.151588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.151617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.151803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.152041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.152070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.152340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.152727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.152787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 
00:26:03.296 [2024-05-15 00:41:29.152998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.153209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.153237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.153440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.153638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.153666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.153853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.154069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.154095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.154302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.154487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.154515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.154745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.154926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.154963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.155207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.155540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.155602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.155823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.156019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.156044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 
00:26:03.296 [2024-05-15 00:41:29.156321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.156557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.156586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.156834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.157006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.157033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.157222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.157383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.157408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.157613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.157789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.157817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.158024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.158188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.158213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.158393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.158553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.158578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.158753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.158987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.159015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 
00:26:03.296 [2024-05-15 00:41:29.159232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.159436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.159464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.159733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.159974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.160002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.160247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.160582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.160634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.160839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.161076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.161102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.161298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.161534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.161585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.161797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.161984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.162010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 00:26:03.296 [2024-05-15 00:41:29.162198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.162525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.162569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.296 qpair failed and we were unable to recover it. 
00:26:03.296 [2024-05-15 00:41:29.162803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.296 [2024-05-15 00:41:29.163023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.163052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.163236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.163417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.163446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.163675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.163904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.163938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.164156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.164366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.164412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.164651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.164856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.164887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.165089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.165274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.165302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.165635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.165887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.165912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 
00:26:03.297 [2024-05-15 00:41:29.166101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.166344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.166372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.166554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.166807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.166831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.167030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.167191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.167217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.167407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.167643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.167672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.167875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.168083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.168111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.168331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.168500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.168528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.168748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.168958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.168986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 
00:26:03.297 [2024-05-15 00:41:29.169167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.169344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.169372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.169592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.169803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.169831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.170064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.170281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.170337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.170519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.170800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.170858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.171044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.171254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.171278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.171476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.171687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.171712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.171950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.172166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.172192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 
00:26:03.297 [2024-05-15 00:41:29.172476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.172688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.172714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.172923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.173121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.173149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.173386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.173596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.173623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.173840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.174022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.174048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.174293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.174629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.174681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.174905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.175114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.297 [2024-05-15 00:41:29.175142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.297 qpair failed and we were unable to recover it. 00:26:03.297 [2024-05-15 00:41:29.175357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.175568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.175597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 
00:26:03.298 [2024-05-15 00:41:29.175780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.175996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.176025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.176207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.176506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.176558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.176764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.176973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.177002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.177180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.177376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.177401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.177589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.177766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.177794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.178006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.178197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.178225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.178420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.178589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.178614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 
00:26:03.298 [2024-05-15 00:41:29.178830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.179039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.179068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.179277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.179491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.179537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.179780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.180004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.180048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.180256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.180487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.180515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.180753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.180987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.181013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.181255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.181481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.181524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.181724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.181905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.181949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 
00:26:03.298 [2024-05-15 00:41:29.182198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.182515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.182580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.182815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.183034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.183064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.183299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.183535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.183564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.183784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.184015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.184045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.184240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.184557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.184601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.184793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.184987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.185013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.185205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.185420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.185463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 
00:26:03.298 [2024-05-15 00:41:29.185664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.185842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.185871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.186085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.186314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.186385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.186620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.186870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.186897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.187091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.187296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.187324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.187560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.187815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.187843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.188088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.188298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.188323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 00:26:03.298 [2024-05-15 00:41:29.188511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.188714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.298 [2024-05-15 00:41:29.188743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.298 qpair failed and we were unable to recover it. 
00:26:03.298 [2024-05-15 00:41:29.188986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.189170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.189198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.189433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.189672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.189733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.189955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.190148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.190176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.190364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.190576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.190604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.190817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.190991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.191021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.191241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.191400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.191441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.191773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.192036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.192065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 
00:26:03.299 [2024-05-15 00:41:29.192267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.192511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.192572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.192778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.192992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.193023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.193260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.193579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.193630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.193866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.194079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.194108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.194320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.194484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.194509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.194718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.194953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.194982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.195159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.195375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.195403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 
00:26:03.299 [2024-05-15 00:41:29.195679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.195911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.195944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.196163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.196448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.196498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.196729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.196967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.196996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.197195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.197374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.197403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.197576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.197775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.197803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.198041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.198211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.198236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.198449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.198680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.198713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 
00:26:03.299 [2024-05-15 00:41:29.198945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.199183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.199222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.199528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.199736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.199764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.199978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.200165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.200190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.200436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.200649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.200674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.200832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.200992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.201035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.201241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.201450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.201475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.299 [2024-05-15 00:41:29.201634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.201825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.201850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 
00:26:03.299 [2024-05-15 00:41:29.202034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.202266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.299 [2024-05-15 00:41:29.202291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.299 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.202446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.202634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.202659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.202860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.203084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.203114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.203332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.203542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.203571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.203778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.204009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.204038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.204207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.204434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.204481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.204813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.205051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.205079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 
00:26:03.300 [2024-05-15 00:41:29.205264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.205547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.205600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.205848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.206058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.206087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.206268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.206626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.206685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.206896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.207062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.207088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.207256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.207556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.207612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.207797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.208008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.208037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.208274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.208485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.208510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 
00:26:03.300 [2024-05-15 00:41:29.208861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.209080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.209108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.209314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.209521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.209549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.209758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.209968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.209997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.210174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.210484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.210542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.210853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.211042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.211067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.211281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.211497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.211526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.211730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.211942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.211971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 
00:26:03.300 [2024-05-15 00:41:29.212141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.212353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.212381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.212560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.212780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.300 [2024-05-15 00:41:29.212805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.300 qpair failed and we were unable to recover it. 00:26:03.300 [2024-05-15 00:41:29.212964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.213209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.213237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.213475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.213803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.213852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.214069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.214313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.214341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.214694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.214899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.214945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.215127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.215313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.215338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 
00:26:03.301 [2024-05-15 00:41:29.215525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.215739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.215767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.215979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.216218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.216242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.216492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.216704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.216730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.216946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.217150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.217178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.217352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.217562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.217589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.217820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.218039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.218069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.218237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.218415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.218443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 
00:26:03.301 [2024-05-15 00:41:29.218678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.218893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.218918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.219119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.219385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.219411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.219568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.219810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.219855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.220089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.220278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.220303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.220517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.220773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.220799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.221039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.221355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.221415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.221627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.221858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.221887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 
00:26:03.301 [2024-05-15 00:41:29.222087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.222299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.222326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.222507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.222748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.222777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.223023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.223238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.223267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.223450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.223672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.223698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.223893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.224089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.224115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.224305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.224533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.224561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 00:26:03.301 [2024-05-15 00:41:29.224742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.224980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.225009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.301 qpair failed and we were unable to recover it. 
00:26:03.301 [2024-05-15 00:41:29.225218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.301 [2024-05-15 00:41:29.225456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.225510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.225773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.226026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.226055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.226262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.226514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.226575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.226811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.226994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.227023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.227227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.227559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.227605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.227802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.228044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.228071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.228228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.228596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.228643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 
00:26:03.302 [2024-05-15 00:41:29.228823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.229028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.229057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.229271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.229598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.229656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.229847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.230058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.230087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.230271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.230441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.230467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.230665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.230891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.230917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.231123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.231338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.231367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.231601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.231829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.231857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 
00:26:03.302 [2024-05-15 00:41:29.232050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.232267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.232293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.232538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.232698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.232724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.232938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.233123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.233148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.233362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.233574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.233599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.233815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.234003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.234045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.234246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.234397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.234422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.234587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.234791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.234820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 
00:26:03.302 [2024-05-15 00:41:29.235044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.235249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.235280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.235495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.235691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.235719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.235944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.236163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.236188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.302 [2024-05-15 00:41:29.236431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.236716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.302 [2024-05-15 00:41:29.236748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.302 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.236990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.237214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.237240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.237405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.237572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.237598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.237805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.238015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.238042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 
00:26:03.303 [2024-05-15 00:41:29.238203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.238367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.238392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.238584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.238765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.238793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.239013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.239174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.239200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.239390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.239569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.239598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.239893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.240106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.240132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.240361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.240579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.240608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.240824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.241048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.241074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 
00:26:03.303 [2024-05-15 00:41:29.241238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.241428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.241458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.241734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.241944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.241972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.242157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.242346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.242372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.242583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.242788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.242814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.243018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.243183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.243210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.243437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.243616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.243644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.243901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.244072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.244099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 
00:26:03.303 [2024-05-15 00:41:29.244258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.244420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.244462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.244759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.244983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.245009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.245173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.245327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.245353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.245548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.245742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.245771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.245967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.246133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.246159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.246323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.246492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.246518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.246757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.246923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.246966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 
00:26:03.303 [2024-05-15 00:41:29.247144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.247300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.303 [2024-05-15 00:41:29.247325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.303 qpair failed and we were unable to recover it. 00:26:03.303 [2024-05-15 00:41:29.247514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.247667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.247695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.247883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.248055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.248081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.248242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.248425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.248450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.248612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.248774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.248800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.248972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.249127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.249153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.249339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.249492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.249517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 
00:26:03.304 [2024-05-15 00:41:29.249707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.249872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.249898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.250090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.250286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.250312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.250473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.250644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.250673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.250839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.251018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.251044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.251239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.251429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.251455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.251609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.251767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.251792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.251965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.252138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.252164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 
00:26:03.304 [2024-05-15 00:41:29.252410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.252594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.252620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.252837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.253010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.253036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.253201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.253365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.253390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.253546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.253741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.253766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.253966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.254156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.254182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.254374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.254535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.254562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.254755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.254912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.254955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 
00:26:03.304 [2024-05-15 00:41:29.255157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.255343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.255369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.255596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.255788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.255815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.256004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.256175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.256201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.256387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.256592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.304 [2024-05-15 00:41:29.256622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.304 qpair failed and we were unable to recover it. 00:26:03.304 [2024-05-15 00:41:29.256840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.257047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.257073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.257291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.257609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.257635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.257793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.257955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.257981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 
00:26:03.305 [2024-05-15 00:41:29.258146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.258353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.258379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.258537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.258720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.258745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.258944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.259134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.259161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.259361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.259606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.259653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.259868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.260065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.260091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.260304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.260515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.260572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.260845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.261068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.261095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 
00:26:03.305 [2024-05-15 00:41:29.261294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.261458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.261499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.261719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.261894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.261923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.262128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.262358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.262409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.262635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.262834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.262863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.263081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.263281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.263307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.263495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.263687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.263731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.263973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.264143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.264168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 
00:26:03.305 [2024-05-15 00:41:29.264381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.264588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.264632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.264813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.265007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.265033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.265195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.265500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.265549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.265756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.266006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.266034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.305 [2024-05-15 00:41:29.266195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.266380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.305 [2024-05-15 00:41:29.266408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.305 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.266588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.266770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.266806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.267026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.267195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.267231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 
00:26:03.306 [2024-05-15 00:41:29.267433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.267587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.267613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.267799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.268025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.268052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.268215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.268449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.268499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.268864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.269075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.269101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.269310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.269543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.269593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.269793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.270015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.270044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.270228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.270389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.270415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 
00:26:03.306 [2024-05-15 00:41:29.270648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.270878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.270904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.271111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.271327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.271353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.271545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.271742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.271769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.272012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.272171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.272196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.272388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.272619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.272665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.272911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.273123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.273151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.273326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.273539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.273565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 
00:26:03.306 [2024-05-15 00:41:29.273732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.273890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.273943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.274130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.274321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.274350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.274561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.274795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.274824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.275047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.275235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.275261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.275458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.275668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.275697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.275918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.276107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.276133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 00:26:03.306 [2024-05-15 00:41:29.276334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.276566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.306 [2024-05-15 00:41:29.276596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.306 qpair failed and we were unable to recover it. 
00:26:03.311 [2024-05-15 00:41:29.339291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.339492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.339536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.339752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.339912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.339944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.340162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.340340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.340385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.340593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.340767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.340795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.341006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.341245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.341290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.341543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.341752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.341780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.341988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.342244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.342268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 
00:26:03.311 [2024-05-15 00:41:29.342440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.342600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.342625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.342807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.343018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.343047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.343262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.343450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.311 [2024-05-15 00:41:29.343493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.311 qpair failed and we were unable to recover it. 00:26:03.311 [2024-05-15 00:41:29.343725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.343963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.343992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.344227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.344415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.344464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.344758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.345061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.345090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.345325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.345487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.345512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 
00:26:03.312 [2024-05-15 00:41:29.345724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.345960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.345986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.346148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.346342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.346367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.346542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.346777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.346802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.347015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.347179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.347203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.347391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.347601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.347626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.347804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.348033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.348084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.348288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.348458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.348485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 
00:26:03.312 [2024-05-15 00:41:29.348694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.348942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.348971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.349147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.349357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.349382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.349609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.349815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.349840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.349999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.350207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.350235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.350467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.350705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.350733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.350912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.351167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.351192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.351405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.351645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.351670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 
00:26:03.312 [2024-05-15 00:41:29.351826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.352031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.352060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.352268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.352552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.352603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.352810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.353045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.353074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.353311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.353488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.353513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.353697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.353878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.353905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.354104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.354343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.354387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.354625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.354861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.354888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 
00:26:03.312 [2024-05-15 00:41:29.355103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.355279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.355308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.355490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.355699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.355724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.355916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.356115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.356141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.356366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.356569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.356596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.312 [2024-05-15 00:41:29.356778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.356958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.312 [2024-05-15 00:41:29.356992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.312 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.357233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.357459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.357487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.357660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.357811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.357853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 
00:26:03.313 [2024-05-15 00:41:29.358077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.358400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.358458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.358702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.358954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.358990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.359294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.359526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.359554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.359757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.359966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.359995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.360200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.360506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.360563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.360777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.361005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.361032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.361195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.361412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.361437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 
00:26:03.313 [2024-05-15 00:41:29.361623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.361843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.361871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.362087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.362310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.362357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.362587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.362769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.362794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.362969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.363381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.363427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.363643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.363805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.363848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.364092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.364304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.364332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.364543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.364730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.364755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 
00:26:03.313 [2024-05-15 00:41:29.364939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.365154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.365179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.365451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.365657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.365685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.365889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.366080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.366111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.366351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.366578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.366622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.366826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.367033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.367062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.367279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.367434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.367459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.367648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.367882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.367915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 
00:26:03.313 [2024-05-15 00:41:29.368111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.368350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.368374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.368610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.368869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.368894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.369089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.369249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.369274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.369493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.369665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.369695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.369893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.370135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.370161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.370438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.370645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.370673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 00:26:03.313 [2024-05-15 00:41:29.370859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.371048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.313 [2024-05-15 00:41:29.371075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.313 qpair failed and we were unable to recover it. 
00:26:03.314 [2024-05-15 00:41:29.371285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.371601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.371646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.371940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.372118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.372145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.372412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.372604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.372629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.372858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.373115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.373140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.373384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.373589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.373614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.373773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.373995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.374025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.374196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.374404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.374432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 
00:26:03.314 [2024-05-15 00:41:29.374609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.374820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.374847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.375087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.375279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.375324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.375555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.375735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.375763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.375985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.376159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.376187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.376402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.376634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.376661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.376841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.377077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.377102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.377261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.377452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.377477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 
00:26:03.314 [2024-05-15 00:41:29.377710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.377915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.377953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.378137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.378305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.378330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.378577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.378809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.378854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.379029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.379289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.379333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.379573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.379786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.379811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.380029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.380292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.380344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.380553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.380782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.380807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 
00:26:03.314 [2024-05-15 00:41:29.381021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.381240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.381268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.381637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.381838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.381864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.382078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.382323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.382351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.382567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.382780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.382824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.314 qpair failed and we were unable to recover it. 00:26:03.314 [2024-05-15 00:41:29.383009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.314 [2024-05-15 00:41:29.383229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.383253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.383501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.383838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.383891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.384108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.384323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.384348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 
00:26:03.315 [2024-05-15 00:41:29.384531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.384733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.384761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.384994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.385223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.385251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.385586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.386004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.386033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.386267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.386477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.386505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.386719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.386925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.386963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.387151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.387375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.387405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.387612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.387811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.387855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 
00:26:03.315 [2024-05-15 00:41:29.388061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.388246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.388274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.388514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.388702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.388727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.388920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.389157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.389185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.389365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.389630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.389658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.389835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.390018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.390046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.390256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.390485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.390529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.390737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.390938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.390967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 
00:26:03.315 [2024-05-15 00:41:29.391244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.391405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.391430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.391614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.391796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.391825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.392014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.392222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.392250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.392462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.392644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.392668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.392890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.393109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.393135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.393294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.393537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.393565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.393815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.394036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.394062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 
00:26:03.315 [2024-05-15 00:41:29.394290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.394573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.394601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.394879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.395074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.395100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.395270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.395434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.395460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.395679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.395858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.395885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.396104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.396341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.396387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.396789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.397028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.315 [2024-05-15 00:41:29.397057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.315 qpair failed and we were unable to recover it. 00:26:03.315 [2024-05-15 00:41:29.397271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.397443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.397470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 
00:26:03.316 [2024-05-15 00:41:29.397688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.397898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.397924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.398145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.398349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.398377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.398753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.399063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.399091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.399305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.399490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.399515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.399713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.399901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.399926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.400100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.400282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.400311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.400701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.400979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.401005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 
00:26:03.316 [2024-05-15 00:41:29.401196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.401364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.401389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.401604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.401822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.401867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.402088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.402258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.402283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.402469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.402699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.402745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.402949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.403127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.403154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.403389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.403593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.403619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.403853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.404059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.404088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 
00:26:03.316 [2024-05-15 00:41:29.404261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.404467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.404494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.404673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.404841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.404868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.405099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.405294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.405338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.405557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.405733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.405761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.405987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.406170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.406195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.406389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.406552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.406577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.406779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.407019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.407065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 
00:26:03.316 [2024-05-15 00:41:29.407248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.407455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.407480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.407696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.407894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.407924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.408167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.408435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.408460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.408699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.408941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.408969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.409190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.409402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.409430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.409662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.409831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.409861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.316 [2024-05-15 00:41:29.410083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.410330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.410391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 
00:26:03.316 [2024-05-15 00:41:29.410627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.410829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.316 [2024-05-15 00:41:29.410858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.316 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.411072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.411296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.411324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.411502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.411735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.411762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.412037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.412312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.412340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.412551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.412765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.412809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.413013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.413219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.413248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.413610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.413876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.413904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 
00:26:03.317 [2024-05-15 00:41:29.414127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.414354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.414379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.414603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.414819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.414844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.415067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.415284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.415310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.415500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.415683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.415708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.415891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.416114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.416142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.416352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.416537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.416562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.416716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.416953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.416982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 
00:26:03.317 [2024-05-15 00:41:29.417236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.417498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.417549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.417777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.418023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.418051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.418268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.418463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.418491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.418730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.418971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.419000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.419277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.419517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.419545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.419779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.419965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.419993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.420224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.420580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.420631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 
00:26:03.317 [2024-05-15 00:41:29.420839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.421048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.421073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.421386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.421693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.421720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.421917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.422117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.422142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.422361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.422568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.422597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.422807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.422985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.423014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.423263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.423685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.423730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.423938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.424150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.424178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 
00:26:03.317 [2024-05-15 00:41:29.424384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.424658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.424708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.424891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.425078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.317 [2024-05-15 00:41:29.425106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.317 qpair failed and we were unable to recover it. 00:26:03.317 [2024-05-15 00:41:29.425355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.425715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.425778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.425996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.426167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.426192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.426388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.426596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.426623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.426811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.427030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.427057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.427361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.427610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.427649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 
00:26:03.318 [2024-05-15 00:41:29.427854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.428095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.428123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.428295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.428467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.428494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.428662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.428838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.428868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.429059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.429259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.429322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.429531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.429759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.429787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.430000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.430218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.430243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.430499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.430878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.430947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 
00:26:03.318 [2024-05-15 00:41:29.431185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.431522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.431575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.431780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.432010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.432038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.432226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.432434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.432462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.432669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.432823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.432848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.433048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.433269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.433300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.433514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.433725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.433753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.433955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.434130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.434159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 
00:26:03.318 [2024-05-15 00:41:29.434359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.434564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.434592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.434924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.435160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.435188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.435402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.435593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.435625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.435821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.436055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.436086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.436310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.436639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.436699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.436914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.437135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.437165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.437379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.437594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.437622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 
00:26:03.318 [2024-05-15 00:41:29.437828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.438040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.438069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.438271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.438430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.438456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.438675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.438882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.438910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.318 qpair failed and we were unable to recover it. 00:26:03.318 [2024-05-15 00:41:29.439159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.439427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.318 [2024-05-15 00:41:29.439454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.439731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.439963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.439989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.440140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.440294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.440319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.440557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.440739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.440767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 
00:26:03.319 [2024-05-15 00:41:29.440977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.441187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.441215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.441389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.441617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.441642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.441828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.442046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.442075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.442254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.442501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.442558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.442774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.442982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.443011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.443189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.443428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.443453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.443634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.443821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.443852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 
00:26:03.319 [2024-05-15 00:41:29.444037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.444312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.444363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.444549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.444709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.444735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.444896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.445097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.445125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.445326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.445546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.445572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.319 [2024-05-15 00:41:29.445856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.446091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.319 [2024-05-15 00:41:29.446118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.319 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.446285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.446527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.446556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.446727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.446949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.446976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 
00:26:03.590 [2024-05-15 00:41:29.447213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.447423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.447452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.447691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.447878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.447904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.448081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.448248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.448274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.448454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.448665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.448690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.448857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.449028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.449056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.449257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.449469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.449498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.449709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.449876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.449901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 
00:26:03.590 [2024-05-15 00:41:29.450129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.450351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.450380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.450557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.450840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.450896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.451148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.451360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.451388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.451566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.451778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.451807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.452005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.452169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.452210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.452450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.452633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.452662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.452897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.453085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.453113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 
00:26:03.590 [2024-05-15 00:41:29.453349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.453621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.453673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.590 qpair failed and we were unable to recover it. 00:26:03.590 [2024-05-15 00:41:29.453889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.590 [2024-05-15 00:41:29.454120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.454149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.454345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.454535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.454563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.454774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.454976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.455005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.455196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.455360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.455388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.455626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.455831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.455859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.456044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.456238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.456263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 
00:26:03.591 [2024-05-15 00:41:29.456449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.456656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.456684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.456890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.457107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.457136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.457375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.457789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.457841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.458054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.458247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.458273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.458481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.458687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.458720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.458937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.459156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.459185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.459396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.459584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.459610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 
00:26:03.591 [2024-05-15 00:41:29.459799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.460018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.460044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.460285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.460498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.460523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.460707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.460920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.460956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.461147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.461307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.461348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.461588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.461838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.461865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.462068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.462271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.462300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.462543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.462799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.462849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 
00:26:03.591 [2024-05-15 00:41:29.463025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.463456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.463507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.463696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.463916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.463952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.464145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.464385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.464413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.464643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.464816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.464841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.465020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.465263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.465326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.465523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.465688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.465732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.465905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.466143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.466169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 
00:26:03.591 [2024-05-15 00:41:29.466356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.466549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.466574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.466808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.466969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.466995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.591 qpair failed and we were unable to recover it. 00:26:03.591 [2024-05-15 00:41:29.467172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.467424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.591 [2024-05-15 00:41:29.467452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.467755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.468051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.468079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.468364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.468699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.468760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.468974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.469152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.469181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.469411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.469592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.469621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 
00:26:03.592 [2024-05-15 00:41:29.469860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.470055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.470084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.470273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.470439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.470464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.470679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.470859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.470888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.471111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.471385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.471410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.471594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.471788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.471814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.472007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.472227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.472256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.472460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.472758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.472787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 
00:26:03.592 [2024-05-15 00:41:29.472981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.473383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.473438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.473840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.474084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.474113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.474290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.474543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.474594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.474805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.475017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.475046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.475229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.475425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.475454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.475798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.476080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.476108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.476323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.476508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.476537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 
00:26:03.592 [2024-05-15 00:41:29.476742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.476949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.476984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.477179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.477422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.477473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.477819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.478061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.478087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.478251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.478419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.478448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.478640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.478847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.478875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.479082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.479293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.479320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.479479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.479639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.479666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 
00:26:03.592 [2024-05-15 00:41:29.479965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.480165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.480194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.480396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.480576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.480606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.480796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.480999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.481028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.481263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.481577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.592 [2024-05-15 00:41:29.481631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.592 qpair failed and we were unable to recover it. 00:26:03.592 [2024-05-15 00:41:29.481813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.481998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.482027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.482211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.482426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.482451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.482639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.482818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.482848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 
00:26:03.593 [2024-05-15 00:41:29.483142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.483476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.483544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.483753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.483956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.483986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.484180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.484404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.484458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.484664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.484877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.484906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.485128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.485345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.485370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.485525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.485782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.485835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.486040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.486256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.486282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 
00:26:03.593 [2024-05-15 00:41:29.486493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.486688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.486749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.486940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.487146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.487175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.487361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.487526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.487551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.487723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.487915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.487954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.488118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.488321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.488350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.488602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.488886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.488913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.489158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.489320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.489346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 
00:26:03.593 [2024-05-15 00:41:29.489557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.489736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.489764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.489944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.490145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.490173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.490509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.490871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.490941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.491132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.491347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.491372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.491527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.491794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.491843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.492052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.492346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.492399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.492733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.492999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.493028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 
00:26:03.593 [2024-05-15 00:41:29.493263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.493473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.493503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.493750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.493941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.493967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.494139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.494329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.494357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.494585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.494800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.494828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.495045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.495229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.495255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.593 qpair failed and we were unable to recover it. 00:26:03.593 [2024-05-15 00:41:29.495461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.593 [2024-05-15 00:41:29.495641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.495666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.495877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.496087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.496116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 
00:26:03.594 [2024-05-15 00:41:29.496465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.496876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.496939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.497168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.497379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.497407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.497618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.497831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.497860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.498067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.498269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.498298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.498471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.498777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.498838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.499082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.499322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.499371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.499602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.499809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.499834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 
00:26:03.594 [2024-05-15 00:41:29.500016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.500205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.500236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.500544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.500918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.500979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.501188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.501394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.501422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.501657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.501861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.501889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.502081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.502353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.502400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.502678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.502910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.502953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.503161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.503387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.503415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 
00:26:03.594 [2024-05-15 00:41:29.503629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.503862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.503919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.504119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.504294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.504322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.504504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.504701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.504729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.504943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.505118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.505147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.505347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.505520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.505549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.505767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.505921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.505953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.506160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.506370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.506398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 
00:26:03.594 [2024-05-15 00:41:29.506611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.506769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.506794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.594 qpair failed and we were unable to recover it. 00:26:03.594 [2024-05-15 00:41:29.506975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.507147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.594 [2024-05-15 00:41:29.507175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.507467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.507862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.507920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.508162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.508445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.508470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.508655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.508854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.508882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.509075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.509250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.509278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.509492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.509705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.509733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 
00:26:03.595 [2024-05-15 00:41:29.509972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.510258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.510286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.510559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.510824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.510849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.511070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.511398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.511446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.511682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.511892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.511920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.512143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.512391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.512419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.512653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.512890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.512918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.513171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.513431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.513459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 
00:26:03.595 [2024-05-15 00:41:29.513693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.513903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.513939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.514184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.514366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.514391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.514549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.514779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.514829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.515080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.515240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.515265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.515423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.515603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.515629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.515808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.515959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.515988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.516158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.516371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.516398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 
00:26:03.595 [2024-05-15 00:41:29.516608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.516842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.516870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.517082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.517308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.517355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.517561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.517801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.517829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.518028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.518253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.518281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.518494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.518805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.518855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.519070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.519242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.519270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.519525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.519803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.519831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 
00:26:03.595 [2024-05-15 00:41:29.520050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.520287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.520337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.520525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.520810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.520863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.521082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.521297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.595 [2024-05-15 00:41:29.521322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.595 qpair failed and we were unable to recover it. 00:26:03.595 [2024-05-15 00:41:29.521594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.521813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.521841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.522055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.522226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.522251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.522460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.522662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.522690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.522897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.523098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.523127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 
00:26:03.596 [2024-05-15 00:41:29.523351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.523629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.523681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.523888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.524153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.524183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.524390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.524721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.524770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.524961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.525142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.525170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.525406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.525588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.525616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.525797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.525989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.526015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.526225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.526490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.526520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 
00:26:03.596 [2024-05-15 00:41:29.526732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.526937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.526970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.527214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.527403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.527428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.527585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.527771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.527799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.527983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.528270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.528322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.528533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.528723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.528783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.529074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.529322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.529347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.529504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.529725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.529777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 
00:26:03.596 [2024-05-15 00:41:29.529987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.530172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.530200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.530383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.530565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.530591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.530779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.531037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.531067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.531313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.531675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.531726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.531964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.532128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.532154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.532355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.532570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.532595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.532916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.533190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.533218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 
00:26:03.596 [2024-05-15 00:41:29.533431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.533613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.533641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.533851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.534100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.534129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.534314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.534532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.534560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.596 qpair failed and we were unable to recover it. 00:26:03.596 [2024-05-15 00:41:29.534926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.535188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.596 [2024-05-15 00:41:29.535216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.535436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.535673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.535701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.535913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.536151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.536179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.536383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.536594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.536619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 
00:26:03.597 [2024-05-15 00:41:29.536836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.537055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.537083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.537297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.537509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.537537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.537773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.537992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.538023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.538231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.538469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.538493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.538707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.538922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.538953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.539170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.539410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.539439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.539675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.539865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.539891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 
00:26:03.597 [2024-05-15 00:41:29.540160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.540355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.540380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.540675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.540917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.540951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.541164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.541375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.541400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.541608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.541799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.541829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.542088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.542343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.542396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.542680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.542851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.542879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.543098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.543302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.543330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 
00:26:03.597 [2024-05-15 00:41:29.543541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.543696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.543737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.543962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.544126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.544152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.544345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.544536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.544565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.544794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.544972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.545001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.545217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.545572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.545621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.545829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.546018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.546055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.546393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.546639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.546673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 
00:26:03.597 [2024-05-15 00:41:29.546908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.547124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.547153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.547331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.547562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.547590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.547799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.548006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.548035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.548214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.548421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.548487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.597 [2024-05-15 00:41:29.548681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.548892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.597 [2024-05-15 00:41:29.548917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.597 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.549112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.549306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.549331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.549533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.549744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.549772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 
00:26:03.598 [2024-05-15 00:41:29.549953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.550163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.550202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.550431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.550680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.550738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.550917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.551101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.551130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.551351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.551534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.551562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.551747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.551946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.551984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.552197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.552490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.552544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.552752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.552957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.552996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 
00:26:03.598 [2024-05-15 00:41:29.553232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.553446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.553474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.553777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.554033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.554059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.554278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.554491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.554520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.554733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.554897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.554923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.555099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.555373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.555399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.555582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.555793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.555821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.556034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.556223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.556248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 
00:26:03.598 [2024-05-15 00:41:29.556436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.556654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.556679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.556893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.557123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.557152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.557388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.557713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.557769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.558007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.558244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.558272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.558480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.558717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.558769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.559014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.559204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.559230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.559434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.559616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.559644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 
00:26:03.598 [2024-05-15 00:41:29.559878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.560095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.560124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.560339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.560538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.560566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.560803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.561019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.561052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.561303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.561511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.561541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.561754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.561970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.561996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.598 qpair failed and we were unable to recover it. 00:26:03.598 [2024-05-15 00:41:29.562185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.562378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.598 [2024-05-15 00:41:29.562404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.562595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.562788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.562813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 
00:26:03.599 [2024-05-15 00:41:29.563004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.563175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.563201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.563391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.563579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.563605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.563795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.563983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.564009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.564205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.564370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.564397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.564560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.564749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.564775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.564946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.565166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.565193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.565355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.565541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.565566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 
00:26:03.599 [2024-05-15 00:41:29.565779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.565975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.566001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.566186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.566346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.566371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.566563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.566753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.599 [2024-05-15 00:41:29.566779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.599 qpair failed and we were unable to recover it. 00:26:03.599 [2024-05-15 00:41:29.566999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.567183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.567209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.567400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.567585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.567611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.567848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.568028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.568054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.568236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.568393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.568419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 
00:26:03.600 [2024-05-15 00:41:29.568582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.568768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.568793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.569009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.569207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.569234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.569435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.569658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.569683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.569864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.570023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.570050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.570266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.570452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.570479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.570642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.570830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.570856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.571033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.571224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.571250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 
00:26:03.600 [2024-05-15 00:41:29.571434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.571625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.571651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.571815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.572035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.572061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.572246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.572427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.572452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.572632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.572821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.572846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.573009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.573176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.573203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.573359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.573548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.573574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.573732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.573914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.573948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 
00:26:03.600 [2024-05-15 00:41:29.574100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.574256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.574282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.574473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.574685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.574711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.574894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.575116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.575142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.575334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.575487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.575513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.575668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.575879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.575905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.576109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.576297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.576322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.600 qpair failed and we were unable to recover it. 00:26:03.600 [2024-05-15 00:41:29.576485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.600 [2024-05-15 00:41:29.576674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.576700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 
00:26:03.601 [2024-05-15 00:41:29.576895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.577088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.577115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.577305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.577517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.577543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.577706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.577867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.577894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.578067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.578252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.578292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.578509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.578687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.578711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.578909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.579075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.579101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.579289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.579487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.579512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 
00:26:03.601 [2024-05-15 00:41:29.579706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.579943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.579970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.580190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.580390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.580415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.580610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.580825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.580849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.581054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.581357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.581404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.581617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.581826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.581855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.582064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.582237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.582262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.582484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.582667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.582692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 
00:26:03.601 [2024-05-15 00:41:29.582940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.583155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.583184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.583384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.583581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.583604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.583816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.584031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.584058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.584250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.584535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.584561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.584818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.585067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.585094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.585289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.585497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.585526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.585729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.585900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.585942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 
00:26:03.601 [2024-05-15 00:41:29.586180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.586524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.586570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.586816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.587000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.587027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.587242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.587589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.587633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.587837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.588068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.588097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.588304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.588544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.588584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.588787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.588980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.589007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.601 qpair failed and we were unable to recover it. 00:26:03.601 [2024-05-15 00:41:29.589226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.589528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.601 [2024-05-15 00:41:29.589578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 
00:26:03.602 [2024-05-15 00:41:29.589821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.590030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.590059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.590267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.590518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.590543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.590760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.590951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.590980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.591150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.591331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.591359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.591594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.591842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.591882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.592094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.592309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.592335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.592535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.592724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.592750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 
00:26:03.602 [2024-05-15 00:41:29.592938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.593182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.593207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.593413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.593674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.593699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.593903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.594127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.594153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.594327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.594491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.594532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.594741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.594947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.594984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.595205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.595488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.595542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.595780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.596071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.596123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 
00:26:03.602 [2024-05-15 00:41:29.596435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.596627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.596652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.596846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.597037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.597063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.597287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.597575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.597627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.597841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.598038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.598067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.598281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.598473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.598497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.598674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.598889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.598915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.599125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.599333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.599359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 
00:26:03.602 [2024-05-15 00:41:29.599548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.599732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.599756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.599973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.600183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.600211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.600387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.600661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.600709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.600944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.601134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.602 [2024-05-15 00:41:29.601162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.602 qpair failed and we were unable to recover it. 00:26:03.602 [2024-05-15 00:41:29.601504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.601886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.601937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.602127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.602320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.602346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.602504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.602692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.602717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 
00:26:03.603 [2024-05-15 00:41:29.602938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.603157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.603185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.603456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.603797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.603846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.604051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.604234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.604262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.604472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.604697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.604752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.604959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.605278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.605330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.605550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.605742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.605784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.605996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.606181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.606206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 
00:26:03.603 [2024-05-15 00:41:29.606438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.606626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.606652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.606808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.607026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.607056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.607343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.607784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.607838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.608053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.608216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.608241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.608412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.608662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.608690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.608891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.609083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.609109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.609273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.609490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.609516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 
00:26:03.603 [2024-05-15 00:41:29.609675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.609861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.609886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.610077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.610330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.610381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.610599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.610786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.610811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.611004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.611264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.611316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.611560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.611790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.611818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.612003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.612232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.612256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.612435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.612623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.612649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 
00:26:03.603 [2024-05-15 00:41:29.612837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.613053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.613079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.613242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.613428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.613454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.613665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.613900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.613937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.614153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.614389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.614417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.614700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.614891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.614916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.603 qpair failed and we were unable to recover it. 00:26:03.603 [2024-05-15 00:41:29.615176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.603 [2024-05-15 00:41:29.615362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.615386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.615608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.615797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.615822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 
00:26:03.604 [2024-05-15 00:41:29.616044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.616297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.616355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.616695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.616962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.616992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.617198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.617431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.617482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.617669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.617855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.617897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.618117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.618516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.618571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.618819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.619023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.619053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.619255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.619479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.619505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 
00:26:03.604 [2024-05-15 00:41:29.619744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.619920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.619974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.620182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.620370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.620395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.620586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.620793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.620821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.621065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.621255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.621279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.621465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.621653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.621695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.621940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.622182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.622210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.622581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.622787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.622812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 
00:26:03.604 [2024-05-15 00:41:29.623003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.623216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.623258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.623445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.623635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.623662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.623857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.624080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.624107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.624312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.624527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.624552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.624743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.624937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.624964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.625145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.625327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.625352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.625564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.625751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.625776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 
00:26:03.604 [2024-05-15 00:41:29.625995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.626208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.626237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.604 qpair failed and we were unable to recover it. 00:26:03.604 [2024-05-15 00:41:29.626444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.604 [2024-05-15 00:41:29.626653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.626695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.626902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.627096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.627121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.627332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.627483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.627508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.627802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.628037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.628066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.628264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.628818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.628850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.629082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.629303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.629329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 
00:26:03.605 [2024-05-15 00:41:29.629541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.629756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.629781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.629947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.630138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.630165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.630355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.630553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.630578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.630808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.631039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.631065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.631272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.631461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.631486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.631675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.631858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.631884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.632088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.632283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.632308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 
00:26:03.605 [2024-05-15 00:41:29.632471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.632629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.632654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.632868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.633037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.633065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.633231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.633395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.633421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.633580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.633767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.633792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.633983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.634152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.634179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.634431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.634605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.634630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.634838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.635009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.635034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 
00:26:03.605 [2024-05-15 00:41:29.635250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.635423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.635447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.635670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.635867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.635891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.636086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.636290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.636315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.636515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.636683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.636707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.636988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.637183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.637208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.637375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.637575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.637600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.637800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.637991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.638027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 
00:26:03.605 [2024-05-15 00:41:29.638263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.638532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.638588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.638790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.639050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.639086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.639297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.639519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.605 [2024-05-15 00:41:29.639552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.605 qpair failed and we were unable to recover it. 00:26:03.605 [2024-05-15 00:41:29.639767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.639961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.640006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.640280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.640523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.640558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.640812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.641073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.641110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.641382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.641611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.641651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 
00:26:03.606 [2024-05-15 00:41:29.641859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.642125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.642162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.642371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.642674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.642708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.642947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.643192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.643225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.643575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.643773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.643813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.644052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.644278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.644330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.644616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.644831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.644870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.645167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.645393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.645430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 
00:26:03.606 [2024-05-15 00:41:29.645659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.645865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.645897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.646096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.646309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.646345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.646616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.646885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.646922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.647162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.647445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.647483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.647739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.647999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.648037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.648270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.648511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.648566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.648878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.649122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.649159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 
00:26:03.606 [2024-05-15 00:41:29.649488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.649765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.649797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.650020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.650242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.650283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.650522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.650721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.650754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.651099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.651364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.651397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.651619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.651817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.651851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.652061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.652337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.652391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.652649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.652871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.652908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 
00:26:03.606 [2024-05-15 00:41:29.653145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.653415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.653471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.653711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.653902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.653957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.654189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.654378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.654410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.654620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.654838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.654870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.606 qpair failed and we were unable to recover it. 00:26:03.606 [2024-05-15 00:41:29.655105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.606 [2024-05-15 00:41:29.655384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.655415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.655740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.656013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.656052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.656315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.656539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.656571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 
00:26:03.607 [2024-05-15 00:41:29.656795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.656972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.657007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.657239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.657443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.657475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.657712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.657935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.657984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.658239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.658462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.658506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.658717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.658963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.658996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.659244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.659429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.659460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.659717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.660006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.660044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 
00:26:03.607 [2024-05-15 00:41:29.660285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.660512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.660568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.660776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.660999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.661037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.661256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.661569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.661601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.661900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.662138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.662174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.662376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.662602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.662639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.662910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.663172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.663205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.663440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.663631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.663668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 
00:26:03.607 [2024-05-15 00:41:29.663880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.664086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.664119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.664359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.664559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.664592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.664810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.665018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.665052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.665343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.665571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.665607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.665829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.666027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.666070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.666314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.666522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.666555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.666794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.666990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.667022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 
00:26:03.607 [2024-05-15 00:41:29.667241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ad0b0 is same with the state(5) to be set 00:26:03.607 [2024-05-15 00:41:29.667605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.667800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.667846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.668081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.668272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.668298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.668515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.668756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.668798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.668979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.669198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.669224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.607 qpair failed and we were unable to recover it. 00:26:03.607 [2024-05-15 00:41:29.669433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.607 [2024-05-15 00:41:29.669664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.669706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.669864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.670031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.670068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.670454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.670653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.670695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 
00:26:03.608 [2024-05-15 00:41:29.670886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.671057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.671083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.671300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.671522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.671570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.671782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.672003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.672029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.672223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.672467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.672494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.672705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.672883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.672910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.673147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.673402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.673444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.673639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.673846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.673871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 
00:26:03.608 [2024-05-15 00:41:29.674094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.674298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.674341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.674522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.674697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.674722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.674912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.675124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.675179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.675427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.675645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.675671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.675883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.676094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.676138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.676330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.676586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.676628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.676833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.677010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.677054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 
00:26:03.608 [2024-05-15 00:41:29.677251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.677508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.677550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.677764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.677959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.677996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.678227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.678459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.678487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.678730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.678978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.679008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.679208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.679440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.679468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.679671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.679849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.679874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.680029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.680220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.680264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 
00:26:03.608 [2024-05-15 00:41:29.680580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.680812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.680837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.681024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.681300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.681342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.681548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.681766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.681791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.681998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.682227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.682254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.682494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.682712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.682737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.608 qpair failed and we were unable to recover it. 00:26:03.608 [2024-05-15 00:41:29.682963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.608 [2024-05-15 00:41:29.683184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.683227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.683427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.683653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.683696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 
00:26:03.609 [2024-05-15 00:41:29.683884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.684079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.684124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.684346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.684599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.684642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.684846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.685083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.685126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.685436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.685656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.685701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.685868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.686170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.686196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.686405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.686659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.686702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.686865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.687051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.687078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 
00:26:03.609 [2024-05-15 00:41:29.687278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.687495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.687543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.687733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.687897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.687922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.688157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.688390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.688433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.688644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.688882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.688907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.689103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.689354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.689396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.689610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.689785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.689811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.690019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.690214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.690255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 
00:26:03.609 [2024-05-15 00:41:29.690463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.690713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.690754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.690941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.691156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.691197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.691430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.691625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.691667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.691831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.692050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.692082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.692274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.692461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.692486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.692672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.692835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.692860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.693053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.693268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.693293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 
00:26:03.609 [2024-05-15 00:41:29.693484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.693655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.693681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.693866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.694055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.694081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.609 qpair failed and we were unable to recover it. 00:26:03.609 [2024-05-15 00:41:29.694273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.609 [2024-05-15 00:41:29.694431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.694456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.694689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.694878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.694904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.695095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.695257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.695283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.695473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.695656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.695681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.695833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.696020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.696051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 
00:26:03.610 [2024-05-15 00:41:29.696241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.696426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.696452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.696640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.696825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.696851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.697034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.697219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.697245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.697432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.697585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.697611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.697822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.698027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.698054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.698208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.698409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.698434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.698633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.698792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.698817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 
00:26:03.610 [2024-05-15 00:41:29.699032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.699221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.699247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.699464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.699626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.699651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.699835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.699999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.700031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.700189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.700409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.700435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.700655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.700844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.700869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.701056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.701341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.701366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.701550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.701709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.701751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 
00:26:03.610 [2024-05-15 00:41:29.701954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.702174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.702200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.702400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.702561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.702585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.702736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.702963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.702989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.703156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.703349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.703378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.703651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.703844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.703871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.704067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.704261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.704287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.704476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.704635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.704661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 
00:26:03.610 [2024-05-15 00:41:29.704826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.705010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.705036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.705257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.705471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.705496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.705678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.705904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.705935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.610 qpair failed and we were unable to recover it. 00:26:03.610 [2024-05-15 00:41:29.706140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.610 [2024-05-15 00:41:29.706365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.706390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.706555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.706745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.706772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.706960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.707118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.707145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.707339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.707518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.707544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 
00:26:03.611 [2024-05-15 00:41:29.707717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.707912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.707946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.708171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.708332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.708357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.708549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.708799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.708823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.708990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.709177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.709202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.709359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.709546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.709571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.709768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.709960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.709986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.710174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.710352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.710377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 
00:26:03.611 [2024-05-15 00:41:29.710538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.710756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.710782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.710947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.711133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.711158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.711370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.711551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.711577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.711731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.711945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.711971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.712173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.712455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.712481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.712701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.712861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.712886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.713064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.713255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.713280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 
00:26:03.611 [2024-05-15 00:41:29.713463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.713632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.713659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.713841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.714034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.714062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.714224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.714423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.714449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.714641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.714894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.714918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.715137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.715352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.715377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.715597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.715807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.715832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.715994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.716181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.716206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 
00:26:03.611 [2024-05-15 00:41:29.716364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.716534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.716559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.716758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.716954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.716980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.717163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.717351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.717377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.717586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.717796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.717822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.611 qpair failed and we were unable to recover it. 00:26:03.611 [2024-05-15 00:41:29.717985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.611 [2024-05-15 00:41:29.718177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.718202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.718376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.718655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.718680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.718877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.719038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.719065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 
00:26:03.612 [2024-05-15 00:41:29.719256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.719446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.719472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.719744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.719896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.719920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.720127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.720318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.720358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.720565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.720816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.720841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.721010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.721438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.721463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.721695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.721867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.721893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.722104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.722322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.722348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 
00:26:03.612 [2024-05-15 00:41:29.722532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.722700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.722726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.722941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.723185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.723211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.723368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.723569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.723594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.723758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.723915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.723945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.724141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.724324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.724349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.724540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.724698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.724723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.724911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.725078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.725104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 
00:26:03.612 [2024-05-15 00:41:29.725282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.725452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.725479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.725669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.725894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.725920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.726099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.726295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.726321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.726486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.726657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.726682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.726870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.727036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.727063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.727252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.727443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.727469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.727658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.727857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.727882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 
00:26:03.612 [2024-05-15 00:41:29.728076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.728266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.728291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.728451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.728667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.728693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.728877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.729035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.729061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.729250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.729455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.729483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.729646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.729855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.729881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.612 qpair failed and we were unable to recover it. 00:26:03.612 [2024-05-15 00:41:29.730113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.612 [2024-05-15 00:41:29.730279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.730304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.730490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.730649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.730674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 
00:26:03.613 [2024-05-15 00:41:29.730867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.731029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.731055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.731257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.731449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.731475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.731639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.731856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.731882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.732100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.732293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.732321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.732492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.732655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.732681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.732879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.733045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.733071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.733235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.733404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.733430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 
00:26:03.613 [2024-05-15 00:41:29.733599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.733783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.733808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.734000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.734165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.734191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.734346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.734534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.734559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.734790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.734960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.734987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.735181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.735339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.735364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.735550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.735766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.735792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.735992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.736152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.736178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 
00:26:03.613 [2024-05-15 00:41:29.736347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.736506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.736531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.736697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.736875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.736903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.737117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.737286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.737316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.737478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.737639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.737665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.737825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.738014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.738039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.738207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.738387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.738413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.738576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.738773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.738800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 
00:26:03.613 [2024-05-15 00:41:29.738965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.739158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.739183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.739402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.739561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.739586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.739750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.739946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.739973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.613 qpair failed and we were unable to recover it. 00:26:03.613 [2024-05-15 00:41:29.740139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.613 [2024-05-15 00:41:29.740334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.740360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.740549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.740740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.740766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.740949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.741118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.741144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.741376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.741536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.741561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-05-15 00:41:29.741750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.741954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.741980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.742144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.742318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.742345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.742529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.742716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.742743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.742938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.743128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.743154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.743377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.743544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.743569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.743782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.743987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.744013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.744223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.744380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.744405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-05-15 00:41:29.744596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.744761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.744787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.745008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.745199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.745225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.745420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.745608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.745634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.745800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.745999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.746024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.746189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.746378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.746403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.746589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.746754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.746780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.746971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.747169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.747195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-05-15 00:41:29.747376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.747545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.747570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.747732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.747898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.747925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.748122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.748310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.748335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.748489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.748679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.748704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.748881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.749069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.749095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.749262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.749430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.749460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.749653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.749840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.749866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-05-15 00:41:29.750029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.750213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.750239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.750435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.750597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.750624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.750780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.751001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.751028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.751214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.751403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.751431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.751594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.751754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.751783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.751954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.752122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.752148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.752339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.752534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.752561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 
00:26:03.900 [2024-05-15 00:41:29.752737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.752900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.752926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.753122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.753313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.753339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.900 qpair failed and we were unable to recover it. 00:26:03.900 [2024-05-15 00:41:29.753510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.900 [2024-05-15 00:41:29.753664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.753690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.753857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.754046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.754073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.754266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.754453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.754480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.754642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.754833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.754858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.755027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.755192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.755220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-05-15 00:41:29.755415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.755582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.755608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.755795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.755994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.756021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.756217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.756379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.756405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.756594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.756753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.756778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.756974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.757137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.757167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.757334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.757542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.757568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.757733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.757922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.757955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-05-15 00:41:29.758169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.758384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.758410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.758603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.758767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.758792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.758984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.759151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.759177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.759341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.759555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.759581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.759776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.759993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.760020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.760172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.760335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.760362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.760553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.760748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.760775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-05-15 00:41:29.760971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.761160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.761185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.761405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.761570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.761596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.761783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.761953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.761979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.762135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.762346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.762371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.762561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.762752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.762778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.762938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.763097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.763123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.763285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.763452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.763480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-05-15 00:41:29.763641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.763828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.763853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.764034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.764194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.764220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.764407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.764594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.764619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.764819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.765006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.765032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.765232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.765389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.765416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.765581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.765735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.765761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.765920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.766091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.766117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-05-15 00:41:29.766308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.766492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.766517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.766680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.766846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.766873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.767039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.767205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.767230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.767419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.767606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.767631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.767855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.768022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.768049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.768211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.768434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.768460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.768629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.768832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.768857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-05-15 00:41:29.769006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.769177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.769203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.769392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.769583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.769608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.769799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.769965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.769994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.770149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.770341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.770366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.770519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.770707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.770732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.770899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.771080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.771106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.771276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.771449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.771478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 
00:26:03.901 [2024-05-15 00:41:29.771647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.771864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.771890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.772093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.772258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.772284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.772440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.772626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.772652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.772865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.773048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.773079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.773272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.773456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.773483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.901 qpair failed and we were unable to recover it. 00:26:03.901 [2024-05-15 00:41:29.773682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.901 [2024-05-15 00:41:29.773897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.773924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.774110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.774304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.774329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.902 [2024-05-15 00:41:29.774497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.774661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.774687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.774857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.775041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.775067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.775285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.775450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.775474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.775644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.775835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.775861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.776066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.776232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.776257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.776416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.776578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.776606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.776769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.776938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.776965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.902 [2024-05-15 00:41:29.777138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.777301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.777327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.777549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.777770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.777796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.777990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.778156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.778183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.778399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.778586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.778613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.778803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.779003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.779040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.779207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.779410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.779437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.779628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.779794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.779818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.902 [2024-05-15 00:41:29.780008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.780174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.780202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.780364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.780565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.780591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.780760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.780916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.780961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.781133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.781318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.781343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.781512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.781672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.781697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.781865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.782032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.782058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.782294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.782512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.782537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.902 [2024-05-15 00:41:29.782697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.782860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.782886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.783080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.783247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.783271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.783428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.783644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.783670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.783829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.783994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.784020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.784213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.784433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.784459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.784651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.784805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.784830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.785027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.785200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.785227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.902 [2024-05-15 00:41:29.785417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.785581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.785606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.785770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.785962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.785988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.786179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.786338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.786363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.786524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.786687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.786713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.786913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.787109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.787135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.787298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.787457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.787482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.787680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.787844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.787870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.902 [2024-05-15 00:41:29.788065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.788230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.788256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.788454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.788627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.788652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.788838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.789029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.789055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.789267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.789458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.789486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.789655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.789846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.789872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.790035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.790252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.790277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.790463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.790625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.790650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.902 [2024-05-15 00:41:29.790838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.791024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.791051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.791209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.791397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.791422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.791611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.791779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.791805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.792004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.792192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.792218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.792407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.792570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.792595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.792762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.793008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.793039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 00:26:03.902 [2024-05-15 00:41:29.793206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.793418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.902 [2024-05-15 00:41:29.793443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.902 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.793632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.793794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.793820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.794016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.794169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.794195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.794412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.794615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.794641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.794801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.794960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.794986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.795179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.795332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.795360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.795522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.795709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.795735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.795941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.796124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.796149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.796312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.796477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.796503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.796668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.796827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.796852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.797048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.797243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.797269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.797433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.797618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.797644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.797813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.797976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.798003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.798191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.798354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.798380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.798562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.798723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.798748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.798921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.799091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.799116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.799273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.799434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.799459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.799644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.799798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.799824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.799994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.800161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.800188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.800351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.800537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.800563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.800729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.800897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.800924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.801096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.801286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.801312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.801479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.801696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.801722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.801922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.802099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.802124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.802341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.802497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.802523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.802702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.802890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.802916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.803142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.803345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.803371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.803532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.803695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.803721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.803918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.804078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.804103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.804267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.804448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.804473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.804686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.804885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.804910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.805147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.805343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.805368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.805524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.805720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.805745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.805906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.806137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.806162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.806333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.806542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.806567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.806756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.806923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.806955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.807140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.807345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.807369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.807557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.807772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.807797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.807983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.808144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.808169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.808369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.808584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.808609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.808763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.808956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.808982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.809169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.809392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.809418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.809606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.809767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.809792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.810018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.810209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.810235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.810391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.810576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.810603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.810807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.810995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.811021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.811212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.811373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.811398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.811589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.811775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.811800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.811991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.812156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.812181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.812333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.812530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.812555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.812739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.812941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.812972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.813137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.813326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.813352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.813511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.813663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.813688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.813854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.814013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.814039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.814231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.814399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.814424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.814616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.814807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.814832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.815053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.815238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.815263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 
00:26:03.903 [2024-05-15 00:41:29.815463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.815629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.815655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.903 qpair failed and we were unable to recover it. 00:26:03.903 [2024-05-15 00:41:29.815844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.903 [2024-05-15 00:41:29.816007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.816034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.816246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.816404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.816429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.816611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.816804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.816829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.816991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.817256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.817281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.817546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.817738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.817764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.817952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.818115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.818140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.818347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.818506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.818546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.818734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.818942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.818967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.819181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.819369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.819394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.819579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.819789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.819813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.820007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.820272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.820312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.820510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.820704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.820729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.820919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.821099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.821126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.821317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.821512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.821537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.821701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.821889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.821914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.822087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.822300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.822325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.822511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.822726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.822751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.822917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.823108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.823134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.823325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.823514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.823538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.823719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.823983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.824009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.824200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.824355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.824380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.824565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.824756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.824782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.824965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.825126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.825151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.825381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.825558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.825583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.825797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.825987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.826013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.826229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.826400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.826425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.826580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.826762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.826788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.826992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.827158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.827183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.827373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.827590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.827615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.827786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.827982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.828007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.828166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.828364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.828389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.828575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.828766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.828791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.828959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.829113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.829138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.829308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.829497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.829527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.829714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.829900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.829925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.830093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.830294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.830319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.830546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.830706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.830731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.830917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.831103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.831128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.831319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.831478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.831503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.831704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.831868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.831893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.832084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.832272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.832297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.832485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.832644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.832670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.832881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.833043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.833069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.833255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.833447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.833474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.833640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.833829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.833854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.834016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.834208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.834233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.834432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.834655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.834680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.834843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.835027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.835052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.835242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.835458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.835483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.835670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.835858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.835885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.836084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.836275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.836300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.836492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.836654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.836679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.836855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.837042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.837068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.837260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.837451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.837476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.837664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.837831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.837856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 
00:26:03.904 [2024-05-15 00:41:29.838071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.838259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.838284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.838473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.838660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.838685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.904 qpair failed and we were unable to recover it. 00:26:03.904 [2024-05-15 00:41:29.838908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.904 [2024-05-15 00:41:29.839106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.839132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.839310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.839502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.839527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.839685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.839888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.839913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.840115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.840335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.840360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.840550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.840733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.840758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-05-15 00:41:29.840956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.841140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.841165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.841359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.841553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.841578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.841773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.841956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.841981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.842173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.842333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.842359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.842576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.842776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.842801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.842986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.843161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.843186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.843367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.843581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.843606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-05-15 00:41:29.843795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.843958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.843984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.844144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.844327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.844353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.844618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.844828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.844853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.845013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.845203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.845228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.845439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.845622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.845646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.845912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.846090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.846115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.846305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.846517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.846542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-05-15 00:41:29.846705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.846898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.846923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.847086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.847278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.847303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.847519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.847676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.847701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.847860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.848060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.848086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.848253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.848518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.848543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.848731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.848916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.848958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.849144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.849355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.849380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-05-15 00:41:29.849569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.849754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.849779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.849981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.850167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.850196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.850357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.850548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.850573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.850785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.850978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.851004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.851200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.851353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.851378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.851567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.851751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.851776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.851975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.852162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.852187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-05-15 00:41:29.852379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.852565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.852590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.852771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.852936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.852961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.853139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.853356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.853381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.853573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.853784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.853810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.854024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.854178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.854203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.854400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.854614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.854639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.854825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.855024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.855050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 
00:26:03.905 [2024-05-15 00:41:29.855202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.855388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.855413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.855602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.855781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.855805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.855960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.856125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.856149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.856346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.856500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.856525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.856709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.856868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.856894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.905 qpair failed and we were unable to recover it. 00:26:03.905 [2024-05-15 00:41:29.857100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.857281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.905 [2024-05-15 00:41:29.857306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.857496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.857683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.857708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.857900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.858065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.858090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.858281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.858469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.858496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.858690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.858886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.858911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.859084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.859246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.859271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.859484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.859677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.859702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.859879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.860072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.860097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.860285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.860513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.860537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.860754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.860945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.860970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.861136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.861325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.861350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.861507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.861713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.861738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.861921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.862125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.862150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.862313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.862507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.862532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.862714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.862887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.862911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.863131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.863298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.863323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.863510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.863699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.863725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.863886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.864117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.864143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.864330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.864557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.864581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.864799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.864963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.864989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.865181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.865339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.865363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.865604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.865801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.865826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.866074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.866266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.866292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.866457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.866636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.866662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.866855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.867055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.867080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.867270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.867422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.867447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.867606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.867788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.867813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.868003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.868170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.868197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.868387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.868571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.868611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.868785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.868969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.868995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.869190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.869371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.869396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.869584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.869804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.869829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.869995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.870155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.870180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.870370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.870559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.870588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.870811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.870999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.871024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.871189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.871409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.871434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.871624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.871835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.871860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.872048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.872271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.872296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.872483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.872668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.872693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.872882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.873071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.873097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.873255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.873442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.873466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.873651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.873843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.873868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.874058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.874211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.874237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.874430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.874597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.874624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.874821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.875015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.875041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.875229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.875419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.875444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.875658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.875845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.875870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.876029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.876220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.876245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.876403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.876601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.876640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.876833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.876995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.877021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.877213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.877377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.877406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.906 [2024-05-15 00:41:29.877619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.877805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.877830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.878016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.878232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.878256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.878468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.878647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.878672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.878862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.879045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.879071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.879269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.879423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.879448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.879635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.879819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.879844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 00:26:03.906 [2024-05-15 00:41:29.880022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.880212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.906 [2024-05-15 00:41:29.880237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.906 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-05-15 00:41:29.880427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.880636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.880661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.880879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.881035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.881062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.881257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.881444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.881468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.881726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.881937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.881961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.882161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.882344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.882369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.882558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.882744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.882769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.883057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.883286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.883311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-05-15 00:41:29.883534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.883700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.883725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.883902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.884060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.884086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.884265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.884489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.884514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.884702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.884884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.884909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.885106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.885283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.885312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.885506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.885751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.885776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.885964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.886152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.886178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-05-15 00:41:29.886397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.886587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.886612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.886833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.886987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.887013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.887174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.887358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.887383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.887548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.887710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.887735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.887939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.888126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.888152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.888319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.888534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.888560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.888722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.888911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.888943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-05-15 00:41:29.889164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.889314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.889340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.889530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.889710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.889735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.889928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.890121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.890146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.890337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.890538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.890565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.890727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.890889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.890917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.891112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.891299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.891329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.891520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.891685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.891711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-05-15 00:41:29.891935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.892132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.892157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.892370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.892525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.892550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.892713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.892894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.892920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.893129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.893319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.893345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.893536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.893752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.893779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.893942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.894131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.894156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.894319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.894511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.894537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-05-15 00:41:29.894721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.894905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.894938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.895126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.895303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.895333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.895498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.895660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.895685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.895901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.896080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.896107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.896270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.896436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.896461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.896652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.896869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.896895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.897069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.897261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.897287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 
00:26:03.907 [2024-05-15 00:41:29.897477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.897672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.897697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.897865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.898023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.898050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.898203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.898389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.898416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.898633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.898816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.898842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.907 qpair failed and we were unable to recover it. 00:26:03.907 [2024-05-15 00:41:29.899107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.907 [2024-05-15 00:41:29.899367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.899411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.899614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.899800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.899825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.900040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.900226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.900253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.900512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.900712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.900737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.900937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.901118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.901144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.901304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.901488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.901514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.901704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.901920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.901953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.902143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.902321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.902346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.902531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.902709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.902733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.902903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.903101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.903127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.903318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.903533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.903562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.903741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.903900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.903949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.904140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.904333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.904358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.904543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.904738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.904763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.904981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.905148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.905174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.905334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.905528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.905554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.905768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.905927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.905958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.906143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.906304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.906329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.906519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.906774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.906814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.907019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.907213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.907238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.907454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.907640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.907666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.907933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.908098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.908123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.908340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.908550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.908575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.908754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.908907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.908945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.909132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.909321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.909346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.909537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.909755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.909781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.909946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.910141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.910168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.910355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.910568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.910594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.910755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.910919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.910950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.911132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.911332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.911358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.911546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.911750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.911774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.911984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.912147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.912172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.912336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.912527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.912567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.912762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.912950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.912976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.913166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.913352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.913378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.913641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.913848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.913874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.914076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.914242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.914267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.914439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.914629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.914654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.914873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.915065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.915091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.915287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.915447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.915474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.915667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.915881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.915906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.916080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.916291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.916317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.916531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.916693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.916718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.916906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.917075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.917115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.917308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.917473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.917498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.917664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.917853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.917878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.918035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.918198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.918225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.918387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.918577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.918604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.918811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.919005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.919031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.919215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.919402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.919427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.919685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.919909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.919940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.920114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.920308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.920333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 
00:26:03.908 [2024-05-15 00:41:29.920523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.920686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.920711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.920877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.921091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.921117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.921280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.921434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.921459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.921623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.921834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.921859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.922027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.922288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.908 [2024-05-15 00:41:29.922314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.908 qpair failed and we were unable to recover it. 00:26:03.908 [2024-05-15 00:41:29.922505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.922698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.922723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.922886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.923046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.923072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.923264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.923454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.923479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.923671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.923828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.923854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.924020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.924208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.924233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.924428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.924614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.924639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.924881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.925040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.925066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.925261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.925452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.925477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.925794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.926021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.926048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.926263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.926441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.926465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.926665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.926842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.926868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.927037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.927194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.927219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.927447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.927638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.927664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.927871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.928052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.928077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.928268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.928426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.928451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.928615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.928812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.928838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.929005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.929222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.929247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.929447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.929632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.929657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.929842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.930053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.930079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.930269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.930461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.930486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.930699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.930908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.930938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.931125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.931313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.931339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.931525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.931687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.931728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.931961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.932147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.932174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.932360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.932568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.932595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.932764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.932962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.932990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.933183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.933396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.933421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.933617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.933784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.933808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.933998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.934188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.934213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.934431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.934624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.934650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.934842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.935025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.935050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.935211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.935375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.935416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.935649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.935881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.935906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.936124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.936337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.936362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.936597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.936801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.936830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.937074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.937276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.937301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.937495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.937697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.937721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.937941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.938097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.938122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.938305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.938492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.938517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.938697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.938884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.938909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.939101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.939259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.939284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.939470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.939684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.939709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.939895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.940087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.940113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.940309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.940493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.940518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.940731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.940938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.940964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.941164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.941320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.941344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.941530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.941688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.941715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.941901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.942062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.942087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.942248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.942468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.942492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.942689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.942881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.942907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.943103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.943294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.943320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 
00:26:03.909 [2024-05-15 00:41:29.943482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.943659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.943684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.943889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.944109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.944135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.944299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.944517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.944542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.944732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.944917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.944953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.909 [2024-05-15 00:41:29.945157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.945358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.909 [2024-05-15 00:41:29.945382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.909 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.945602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.945787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.945814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.946016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.946207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.946232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.946392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.946553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.946578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.946765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.946978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.947004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.947227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.947418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.947443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.947652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.947840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.947865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.948016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.948179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.948204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.948419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.948580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.948607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.948818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.949016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.949042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.949242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.949423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.949448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.949635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.949787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.949813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.949982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.950187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.950213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.950403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.950586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.950611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.950775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.950945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.950971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.951166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.951365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.951391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.951592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.951777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.951802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.951989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.952144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.952169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.952357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.952556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.952581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.952746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.952939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.952964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.953169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.953353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.953377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.953563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.953721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.953746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.953941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.954131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.954156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.954343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.954529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.954554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.954755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.954920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.954956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.955146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.955339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.955364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.955555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.955738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.955763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.955955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.956119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.956144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.956315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.956514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.956540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.956730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.956913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.956943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.957104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.957319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.957344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.957533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.957730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.957756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.957944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.958110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.958134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.958285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.958473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.958497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.958683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.958892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.958917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.959123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.959289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.959316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.959536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.959731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.959757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.959921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.960129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.960154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.960356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.960548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.960574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.960757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.960949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.960975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.961139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.961319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.961348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.961538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.961747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.961772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.961953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.962154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.962180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.962367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.962563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.962588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.962802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.962955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.962980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.963170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.963333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.963358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.963573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.963762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.963787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.963957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.964142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.964167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.964345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.964529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.964554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.964739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.964925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.964956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.965147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.965317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.965349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 00:26:03.910 [2024-05-15 00:41:29.965563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.965752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.910 [2024-05-15 00:41:29.965777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.910 qpair failed and we were unable to recover it. 
00:26:03.910 [2024-05-15 00:41:29.965959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.966175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.966201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.966370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.966552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.966577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.966745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.966967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.966993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.967185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.967373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.967398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.967590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.967778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.967803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.967991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.968148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.968174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.968363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.968544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.968569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.968758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.968924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.968953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.969168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.969355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.969382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.969580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.969768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.969793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.969976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.970175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.970201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.970417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.970580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.970605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.970765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.970956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.970982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.971172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.971328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.971353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.971514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.971674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.971700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.971892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.972095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.972121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.972308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.972457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.972482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.972694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.972859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.972884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.973046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.973257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.973283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.973454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.973641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.973666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.973856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.974070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.974097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.974326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.974516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.974542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.974726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.974916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.974953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.975111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.975280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.975306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.975538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.975704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.975730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.975921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.976088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.976114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.976300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.976528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.976554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.976736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.976896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.976921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.977090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.977294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.977320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.977510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.977706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.977732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.977922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.978092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.978118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.978276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.978435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.978460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.978644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.978846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.978871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.979036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.979226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.979252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.979423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.979583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.979609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.979765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.979949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.979975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.980160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.980323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.980348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.980533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.980739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.980766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.980962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.981157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.981183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.981356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.981528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.981561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.981721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.981907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.981937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.982097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.982283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.982312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.982472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.982653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.982679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.982839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.983029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.983056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.983222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.983387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.983413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.983606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.983763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.983788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.983955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.984117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.984143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.984307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.984499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.984525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.984712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.984902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.984928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.985127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.985322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.985350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.985524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.985688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.985713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.985898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.986071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.986099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.986270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.986458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.986484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.986671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.986823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.986848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.987038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.987204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.987229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 00:26:03.911 [2024-05-15 00:41:29.987419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.987608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.911 [2024-05-15 00:41:29.987633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:03.911 qpair failed and we were unable to recover it. 
00:26:03.911 [2024-05-15 00:41:29.987831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.911 [2024-05-15 00:41:29.988046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.911 [2024-05-15 00:41:29.988072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:03.911 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed with errno = 111 from posix.c:1037, nvme_tcp.c:2374 reporting a sock connection error for tqpair=0x9b0420 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnection attempt between 00:41:29.988 and 00:41:30.049; the elapsed-time prefix advances from 00:26:03.911 to 00:26:04.182 ...]
00:26:04.182 [2024-05-15 00:41:30.049636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.182 [2024-05-15 00:41:30.049825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.182 [2024-05-15 00:41:30.049850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.182 qpair failed and we were unable to recover it.
00:26:04.182 [2024-05-15 00:41:30.050114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.050316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.050341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.050504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.050691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.050716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.050908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.051102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.051128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.051291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.051480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.051505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.051668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.051887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.051912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.052109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.052308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.052333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.052520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.052731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.052755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 
00:26:04.182 [2024-05-15 00:41:30.052920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.053089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.053114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.053297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.053463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.053488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.053679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.053907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.053937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.054168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.054366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.054391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.054552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.054734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.054759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.054965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.055134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.055159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.055361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.055561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.055586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 
00:26:04.182 [2024-05-15 00:41:30.055739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.055936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.055962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.056135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.056301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.056327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.182 qpair failed and we were unable to recover it. 00:26:04.182 [2024-05-15 00:41:30.056541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.182 [2024-05-15 00:41:30.056695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.056720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.056884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.057065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.057091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.057287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.057477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.057502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.057695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.057878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.057903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.058130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.058330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.058355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 
00:26:04.183 [2024-05-15 00:41:30.058557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.058772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.058796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.058995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.059158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.059183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.059399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.059557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.059587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.059796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.059994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.060020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.060239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.060452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.060477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.060678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.060946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.060972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.061135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.061323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.061347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 
00:26:04.183 [2024-05-15 00:41:30.061541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.061704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.061729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.062007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.062187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.062212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.062405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.062569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.062595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.062808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.062977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.063002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.063215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.063376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.063401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.063618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.063784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.063811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.064008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.064233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.064258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 
00:26:04.183 [2024-05-15 00:41:30.064420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.064613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.064638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.064832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.065023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.065049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.065202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.065357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.065382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.065536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.065742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.065767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.065956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.066130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.066157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.066352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.066547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.066573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.066735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.066919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.066951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 
00:26:04.183 [2024-05-15 00:41:30.067157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.067425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.067450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.067666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.067857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.067882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.068053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.068233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.068259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.068451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.068657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.068682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.183 qpair failed and we were unable to recover it. 00:26:04.183 [2024-05-15 00:41:30.068835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.183 [2024-05-15 00:41:30.069026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.069052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.069209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.069367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.069394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.069550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.069748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.069773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 
00:26:04.184 [2024-05-15 00:41:30.069988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.070178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.070202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.070386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.070651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.070675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.070899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.071081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.071107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.071285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.071482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.071507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.071696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.071886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.071912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.072113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.072291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.072317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.072497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.072688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.072713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 
00:26:04.184 [2024-05-15 00:41:30.072902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.073098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.073124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.073289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.073451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.073476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.073661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.073852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.073877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.074070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.074248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.074274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.074477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.074657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.074682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.074881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.075083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.075108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.075306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.075498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.075523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 
00:26:04.184 [2024-05-15 00:41:30.075714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.075896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.075921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.076122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.076311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.076341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.076493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.076648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.076675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.076894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.077064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.077090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.077292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.077490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.077515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.077727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.077916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.077950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.078172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.078367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.078394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 
00:26:04.184 [2024-05-15 00:41:30.078612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.078769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.078796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.079009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.079180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.079205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.079415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.079638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.079664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.079833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.080046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.080072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.080263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.080487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.080516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.080738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.080934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.184 [2024-05-15 00:41:30.080960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.184 qpair failed and we were unable to recover it. 00:26:04.184 [2024-05-15 00:41:30.081149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.081339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.081364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 
00:26:04.185 [2024-05-15 00:41:30.081552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.081740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.081765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.081993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.082148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.082173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.082398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.082582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.082607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.082822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.082978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.083004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.083177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.083392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.083417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.083601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.083809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.083834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.084020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.084186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.084211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 
00:26:04.185 [2024-05-15 00:41:30.084378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.084599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.084624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.084796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.084980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.085005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.085168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.085327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.085352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.085540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.085719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.085745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.085935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.086112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.086139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.086326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.086512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.086537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.086741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.086908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.086940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 
00:26:04.185 [2024-05-15 00:41:30.087136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.087317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.087343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.087530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.087689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.087715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.087910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.088142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.088168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.088388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.088587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.088612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.088795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.088981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.089007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.089211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.089400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.089425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.089611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.089797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.089822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 
00:26:04.185 [2024-05-15 00:41:30.089977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.090177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.090202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.090397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.090611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.090636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.090823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.091016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.091058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.091232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.091417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.091442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.091636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.091823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.091848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.092010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.092205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.092230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.092430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.092629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.092654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 
00:26:04.185 [2024-05-15 00:41:30.092850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.093084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.185 [2024-05-15 00:41:30.093110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.185 qpair failed and we were unable to recover it. 00:26:04.185 [2024-05-15 00:41:30.093327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.093508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.093533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.093745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.093910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.093940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.094163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.094377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.094402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.094596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.094788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.094813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.095005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.095194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.095220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.095412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.095628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.095653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 
00:26:04.186 [2024-05-15 00:41:30.095823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.096040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.096066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.096229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.096414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.096439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.096632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.096794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.096819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.097033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.097203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.097232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.097420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.097604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.097629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.097816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.097997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.098023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.098182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.098384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.098409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 
00:26:04.186 [2024-05-15 00:41:30.098569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.098779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.098804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.099046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.099237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.099262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.099418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.099599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.099624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.099805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.099960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.099991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.100212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.100378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.100403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.100588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.100797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.100822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.101004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.101223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.101248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 
00:26:04.186 [2024-05-15 00:41:30.101419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.101604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.101629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.101817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.102012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.102037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.102231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.102444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.102469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.102673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.102885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.102910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.186 qpair failed and we were unable to recover it. 00:26:04.186 [2024-05-15 00:41:30.103088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.186 [2024-05-15 00:41:30.103281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.103306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.103496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.103682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.103707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.103888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.104088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.104114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-05-15 00:41:30.104334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.104545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.104570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.104755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.104912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.104943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.105135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.105304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.105330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.105506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.105721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.105746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.105897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.106104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.106130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.106322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.106535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.106560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.106748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.106912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.106944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-05-15 00:41:30.107129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.107287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.107312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.107494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.107709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.107734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.107927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.108103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.108129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.108289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.108504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.108529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.108750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.108972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.108998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.109189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.109405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.109429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.109614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.109801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.109825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-05-15 00:41:30.110014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.110184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.110209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.110365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.110554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.110579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.110771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.110955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.110981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.111153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.111348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.111373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.111583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.111800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.111825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.111994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.112168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.112193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.112363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.112580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.112605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 
00:26:04.187 [2024-05-15 00:41:30.112788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.113005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.113031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.113228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.113420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.113444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.113599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.113795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.113821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.114036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.114203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.114228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.114393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.114549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.114574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.114797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.115012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.115038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.187 qpair failed and we were unable to recover it. 00:26:04.187 [2024-05-15 00:41:30.115231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.187 [2024-05-15 00:41:30.115447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.115471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-05-15 00:41:30.115660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.115846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.115871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.116123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.116286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.116312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.116473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.116659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.116686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.116872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.117082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.117108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.117301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.117511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.117537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.117727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.117912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.117947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.118110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.118296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.118321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-05-15 00:41:30.118508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.118705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.118730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.118917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.119083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.119108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.119296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.119482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.119507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.119696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.119875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.119900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.120098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.120284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.120309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.120483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.120672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.120697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.120887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.121083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.121110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-05-15 00:41:30.121302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.121466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.121493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.121686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.121900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.121925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.122128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.122285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.122310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.122508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.122694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.122719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.122940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.123109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.123134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.123348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.123531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.123557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.123738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.123904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.123936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-05-15 00:41:30.124134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.124297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.124322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.124534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.124745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.124769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.124958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.125140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.125165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.125345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.125508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.125532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.125721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.125940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.125965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.126161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.126379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.126404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.126569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.126752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.126777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 
00:26:04.188 [2024-05-15 00:41:30.126971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.127137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.188 [2024-05-15 00:41:30.127162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.188 qpair failed and we were unable to recover it. 00:26:04.188 [2024-05-15 00:41:30.127350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.127565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.127590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.127804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.127994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.128020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.128205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.128421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.128446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.128607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.128769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.128794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.128958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.129154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.129179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.129382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.129585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.129610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-05-15 00:41:30.129822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.130024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.130050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.130234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.130457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.130482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.130642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.130859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.130884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.131070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.131256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.131281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.131470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.131659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.131684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.131867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.132029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.132054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.132221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.132439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.132465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-05-15 00:41:30.132652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.132835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.132860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.133049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.133277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.133303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.133488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.133674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.133699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.133895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.134085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.134110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.134279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.134464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.134493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.134683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.134870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.134895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.135091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.135246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.135271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-05-15 00:41:30.135459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.135616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.135641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.135801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.135987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.136013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.136227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.136395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.136420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.136632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.136844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.136869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.137059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.137226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.137251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.137432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.137625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.137650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.137812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.138024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.138050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 
00:26:04.189 [2024-05-15 00:41:30.138241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.138408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.138439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.138625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.138801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.138826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.189 [2024-05-15 00:41:30.139043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.139229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.189 [2024-05-15 00:41:30.139254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.189 qpair failed and we were unable to recover it. 00:26:04.190 [2024-05-15 00:41:30.139414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.139574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.139599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-05-15 00:41:30.139756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.139939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.139964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-05-15 00:41:30.140151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.140349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.140374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 00:26:04.190 [2024-05-15 00:41:30.140566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.140731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.190 [2024-05-15 00:41:30.140756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.190 qpair failed and we were unable to recover it. 
00:26:04.190 [2024-05-15 00:41:30.140956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.190 [2024-05-15 00:41:30.141142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.190 [2024-05-15 00:41:30.141167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.190 qpair failed and we were unable to recover it.
00:26:04.190 [... the same three-message retry cycle repeats continuously from 00:41:30.141349 through 00:41:30.202064 (console timestamps 00:26:04.190-00:26:04.195): posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (twice per cycle), then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it. ...]
00:26:04.195 [2024-05-15 00:41:30.202254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.202414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.202439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.202654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.202838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.202863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.203056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.203242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.203267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.203463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.203622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.203647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.203830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.204020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.204047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.204232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.204386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.204411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.204601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.204786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.204811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 
00:26:04.195 [2024-05-15 00:41:30.205027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.205180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.205206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.205359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.205571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.205596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.205807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.205996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.206022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.206243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.206432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.206457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.206642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.206824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.206848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.195 qpair failed and we were unable to recover it. 00:26:04.195 [2024-05-15 00:41:30.207057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.195 [2024-05-15 00:41:30.207246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.207271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 00:26:04.196 [2024-05-15 00:41:30.207458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.207680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.207705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 
00:26:04.196 [2024-05-15 00:41:30.207896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.208106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.208132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 00:26:04.196 [2024-05-15 00:41:30.208301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.208487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.208513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 00:26:04.196 [2024-05-15 00:41:30.208676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.208868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.208893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 00:26:04.196 [2024-05-15 00:41:30.209086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.209249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.209274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 00:26:04.196 [2024-05-15 00:41:30.209485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.209639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.209664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 00:26:04.196 [2024-05-15 00:41:30.209874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.210087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.210113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 00:26:04.196 [2024-05-15 00:41:30.210294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.210486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.196 [2024-05-15 00:41:30.210511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.196 qpair failed and we were unable to recover it. 
00:26:04.196 [2024-05-15 00:41:30.212630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.196 [2024-05-15 00:41:30.212803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.196 [2024-05-15 00:41:30.212829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 993960 Killed "${NVMF_APP[@]}" "$@"
00:26:04.196 qpair failed and we were unable to recover it.
00:26:04.196 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:04.196 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:04.196 [2024-05-15 00:41:30.213971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.196 [2024-05-15 00:41:30.213998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.196 qpair failed and we were unable to recover it.
00:26:04.196 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:04.196 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable
00:26:04.196 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
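The errno = 111 that keeps appearing here is ECONNREFUSED on Linux: once target_disconnect.sh kills the running nvmf_tgt app (the "Killed" line for pid 993960 above), nothing is listening on 10.0.0.2:4420 any more, so every connect() issued from posix_sock_create() is refused until a new target comes up. A minimal reproduction sketch, not part of the test scripts, assuming nothing is listening on the chosen local port:

    # Connecting to a TCP port that has no listener fails with "Connection refused"
    # (ECONNREFUSED, which is errno 111 on Linux) - the same error the host logs above.
    if ! bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420' 2>/dev/null; then
        echo "connect() refused: errno 111 / ECONNREFUSED"
    fi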
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=994523
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 994523
00:26:04.197 [2024-05-15 00:41:30.220258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.197 [2024-05-15 00:41:30.220285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.197 qpair failed and we were unable to recover it.
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 994523 ']'
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable
00:26:04.197 00:41:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:04.197 [2024-05-15 00:41:30.221290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.197 [2024-05-15 00:41:30.221510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.197 [2024-05-15 00:41:30.221535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.197 qpair failed and we were unable to recover it.
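Judging by the trace above, waitforlisten (expanded from common/autotest_common.sh) blocks until the freshly launched nvmf_tgt (pid 994523) is alive and answering on its RPC socket, while the host's connect() retries keep failing in the background. A rough stand-in for that wait loop, not the actual helper; the pid, socket path and retry count are taken from the trace lines above:

    pid=994523                      # from 'nvmfpid=994523' above
    rpc_addr=/var/tmp/spdk.sock     # from 'local rpc_addr=/var/tmp/spdk.sock' above
    max_retries=100                 # from 'local max_retries=100' above
    for ((i = 0; i < max_retries; i++)); do
        # give up if the app died instead of starting
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited before listening"; exit 1; }
        # done once the RPC UNIX domain socket exists
        [ -S "$rpc_addr" ] && { echo "process $pid is listening on $rpc_addr"; exit 0; }
        sleep 0.5
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1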
00:26:04.197 [2024-05-15 00:41:30.223298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.197 [2024-05-15 00:41:30.223494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.197 [2024-05-15 00:41:30.223520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.197 qpair failed and we were unable to recover it.
00:26:04.200 [... the same connect() failed (errno = 111) / qpair failed sequence repeats, only the timestamps changing ...]
00:26:04.200 [2024-05-15 00:41:30.251133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.251296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.251322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.251515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.251703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.251728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.251925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.252131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.252157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.252332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.252525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.252551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.252715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.252939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.252965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.253137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.253312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.253339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.253511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.253734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.253759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-05-15 00:41:30.253946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.254144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.254175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.254406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.254596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.254621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.254810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.254979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.255007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.255198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.255363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.255388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.255577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.255760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.255785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.255972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.256126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.256151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.256369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.256529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.256555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-05-15 00:41:30.256746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.256939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.256965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.257155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.257366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.257399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.257587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.257744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.257770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.257964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.258175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.258200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.258389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.258557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.258583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.258777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.258973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.258999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.259189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.259341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.259368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-05-15 00:41:30.259553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.259766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.259791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.259983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.260178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.260205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.260368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.260586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.260612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.260812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.260981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.261008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.261173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.261365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.261391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.261557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.261717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.261742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.200 [2024-05-15 00:41:30.261923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.262086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.262113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 
00:26:04.200 [2024-05-15 00:41:30.262340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.262497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.200 [2024-05-15 00:41:30.262522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.200 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.262704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.262868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.262893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.263084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.263256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.263282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.263464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.263631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.263656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.263819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.264009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.264036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.264224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.264415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.264442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.264630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.264820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.264845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-05-15 00:41:30.265020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.265214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.265239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.265402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.265590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.265620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.265831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.265885] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:26:04.201 [2024-05-15 00:41:30.265981] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.201 [2024-05-15 00:41:30.266000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.266026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.266245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.266430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.266456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.266617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.266780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.266805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.266984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.267170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.267195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.267386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.267571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.267597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-05-15 00:41:30.267788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.267956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.267983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.268148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.268322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.268349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.268539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.268725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.268751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.268914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.269090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.269117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.269344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.269501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.269527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.269693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.269879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.269909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.270114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.270278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.270306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-05-15 00:41:30.270499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.270714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.270739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.270924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.271115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.271142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.271366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.271579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.271615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.271831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.272039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.272073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.272283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.272521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.272556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.272770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.272952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.272985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.273174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.273411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.273447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 
00:26:04.201 [2024-05-15 00:41:30.273637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.273846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.273880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.201 [2024-05-15 00:41:30.274097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.274337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.201 [2024-05-15 00:41:30.274370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.201 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.274551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.274754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.274788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.275012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.275206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.275241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.275487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.275683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.275718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.275936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.276142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.276176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.276359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.276543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.276575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 [2024-05-15 00:41:30.276746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.276945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.276977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.277159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.277367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.277400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.277591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.277810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.277846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.278055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.278269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.278304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.278482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.278680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.278712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.278904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.279097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.279130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.279300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.279477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.279510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 [2024-05-15 00:41:30.279719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.279897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.279941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.280127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.280302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.280336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.280520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.280751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.280785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.280980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.281163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.281196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.281418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.281605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.281637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.281823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.282009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.282041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.282249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.282431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.282464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 [2024-05-15 00:41:30.282659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.282846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.282881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.283091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.283298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.283332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.283529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.283737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.283776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.283974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.284156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.284190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.284375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.284549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.284581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.284757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.284949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.284983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.285168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.285377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.285410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 
00:26:04.202 [2024-05-15 00:41:30.285596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.285784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.285817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.285994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.286175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.286207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.286422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.286637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.286670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.202 qpair failed and we were unable to recover it. 00:26:04.202 [2024-05-15 00:41:30.286887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.202 [2024-05-15 00:41:30.287084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.287119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.287316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.287522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.287556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.287773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.287976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.288009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.288216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.288416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.288450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 
00:26:04.203 [2024-05-15 00:41:30.288652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.288827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.288860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.289073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.289278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.289309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.289516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.289723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.289758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.289955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.290175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.290208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.290424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.290635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.290670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.290885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.291071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.291104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.291312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.291518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.291550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 
00:26:04.203 [2024-05-15 00:41:30.291738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.291914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.291956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.292153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.292331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.292363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.292538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.292768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.292802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.292993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.293180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.293213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.293398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.293590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.293622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.293836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.294013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.294050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.294262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.294483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.294521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 
00:26:04.203 [2024-05-15 00:41:30.294735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.294953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.294988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.295170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.295366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.295402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.295610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.295819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.295852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.296037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.296214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.296248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.296461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.296673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.296707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.296893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.297104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.297137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.297347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.297551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.297591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 
00:26:04.203 [2024-05-15 00:41:30.297776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.297983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.298020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.298239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.298444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.298477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.298707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.298926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.298976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.299176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.299348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.299382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.299592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.299775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.203 [2024-05-15 00:41:30.299808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.203 qpair failed and we were unable to recover it. 00:26:04.203 [2024-05-15 00:41:30.300020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.300212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.300248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.300456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.300636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.300671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 
00:26:04.204 [2024-05-15 00:41:30.300892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.301078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.301113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.301287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.301488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.301521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.301701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.301869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.301903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.302122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.302332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.302364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.302540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.302719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.302752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.302928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.303146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.303179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.303403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.303616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.303649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 
00:26:04.204 [2024-05-15 00:41:30.303852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.304080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.304115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.304311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.304500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.304534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.304724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.304936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.304971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.305161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.204 [2024-05-15 00:41:30.305366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.305399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.305600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.305808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.305843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.306050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.306229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.306263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.306471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.306678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.306712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 
00:26:04.204 [2024-05-15 00:41:30.306921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.307135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.307169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.307372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.307545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.307577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.307823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.308026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.308061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.308300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.308507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.308539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.308789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.308999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.309034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.309252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.309456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.309485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.309677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.309837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.309863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 
00:26:04.204 [2024-05-15 00:41:30.310041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.310258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.310284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.310497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.310662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.310689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.204 qpair failed and we were unable to recover it. 00:26:04.204 [2024-05-15 00:41:30.310911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.311079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.204 [2024-05-15 00:41:30.311105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.311273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.311457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.311482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.311671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.311858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.311883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.312088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.312246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.312271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.312429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.312605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.312630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-05-15 00:41:30.312823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.312991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.313017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.313205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.313427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.313452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.313637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.313843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.313867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.314047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.314214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.314239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.314394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.314546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.314571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.314793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.315010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.315038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.315200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.315421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.315446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-05-15 00:41:30.315642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.315824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.315848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.316034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.316197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.316222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.316382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.316573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.316599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.316815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.316986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.317011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.317207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.317393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.317434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.317611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.317812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.317837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.318037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.318226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.318257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-05-15 00:41:30.318455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.318617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.318642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.318826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.318997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.319024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.319218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.319378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.319403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.319566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.319753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.319778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.320004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.320189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.320215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.320368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.320531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.320557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.320739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.320899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.320924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 
00:26:04.205 [2024-05-15 00:41:30.321091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.321273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.321298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.321488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.321651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.321677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.321838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.322035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.322062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.322244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.322428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.322453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.205 qpair failed and we were unable to recover it. 00:26:04.205 [2024-05-15 00:41:30.322617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.205 [2024-05-15 00:41:30.322826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.322851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.323044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.323197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.323223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.323383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.323590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.323615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.206 [2024-05-15 00:41:30.323765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.323916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.323953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.324148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.324345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.324370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.324573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.324759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.324785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.324966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.325121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.325146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.325337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.325496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.325521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.325734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.325918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.325952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.326144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.326308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.326334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.206 [2024-05-15 00:41:30.326548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.326735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.326760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.326935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.327125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.327150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.327306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.327523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.327558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.327768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.327967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.327993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.328178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.328342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.328366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.328543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.328695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.328721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.328917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.329086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.329111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.206 [2024-05-15 00:41:30.329305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.329515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.329540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.329722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.329944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.329971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.330176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.330406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.330432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.330643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.330809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.330835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.331040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.331204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.331237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.331428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.331649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.331675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.331846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.332035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.332061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 
00:26:04.206 [2024-05-15 00:41:30.332255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.332449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.332474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.332697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.332869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.332898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.333098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.333261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.333286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.333490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.333662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.206 [2024-05-15 00:41:30.333687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.206 qpair failed and we were unable to recover it. 00:26:04.206 [2024-05-15 00:41:30.333915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.334110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.334135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.334328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.334524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.334548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.334738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.334947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.334973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 
00:26:04.471 [2024-05-15 00:41:30.335142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.335365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.335391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.335636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.335826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.335853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.336090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.336253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.336279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.336449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.336639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.336665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.336838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.337038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.337064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.337278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.337441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.337467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.337673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.337863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.337893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 
00:26:04.471 [2024-05-15 00:41:30.338126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.338311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.338338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.338527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.338684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.338709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.338896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.339096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.339121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.339324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.339514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.339541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.339724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.339908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.339950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.340136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.340295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.471 [2024-05-15 00:41:30.340320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.471 qpair failed and we were unable to recover it. 00:26:04.471 [2024-05-15 00:41:30.340510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.340716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.340741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 
00:26:04.472 [2024-05-15 00:41:30.340900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.341136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.341163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.341332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.341545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.341570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.341754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.342007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.342033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.342221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.342387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.342428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.342610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.342795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.342834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.343058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.343250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.343276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.343472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.343644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.343685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 
00:26:04.472 [2024-05-15 00:41:30.343883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.344069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.344094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.344260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.344443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.344468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.344688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.344899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.344954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.345123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.345277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.345304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.345526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.345689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.345714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.345874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.346061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.346089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.346253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.346365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.472 [2024-05-15 00:41:30.346439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.346463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 
00:26:04.472 [2024-05-15 00:41:30.346636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.346854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.346879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.347053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.347225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.347256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.347411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.347569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.347594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.347810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.347977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.348002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.348163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.348354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.348379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.348567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.348733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.348758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.348954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.349150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.349176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 
00:26:04.472 [2024-05-15 00:41:30.349407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.349579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.349604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.349795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.349982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.350009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.472 [2024-05-15 00:41:30.350182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.350379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.472 [2024-05-15 00:41:30.350404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.472 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.350591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.350754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.350780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.350968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.351157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.351182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.351363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.351612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.351638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.351822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.352038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.352064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 
00:26:04.473 [2024-05-15 00:41:30.352230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.352400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.352427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.352619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.352818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.352843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.353062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.353274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.353300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.353515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.353742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.353767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.353921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.354099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.354124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.354348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.354518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.354545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.354799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.355075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.355101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 
00:26:04.473 [2024-05-15 00:41:30.355326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.355517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.355543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.355761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.355957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.355983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.356284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.356481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.356507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.356725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.356883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.356908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.357077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.357264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.357301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.357497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.357714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.357740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.357906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.358101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.358127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 
00:26:04.473 [2024-05-15 00:41:30.358326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.358496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.358521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.358734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.358926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.358958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.359184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.359389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.359414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.359619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.359812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.359837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.360043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.360240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.360266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.360491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.360682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.360707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.360958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.361115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.361141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 
00:26:04.473 [2024-05-15 00:41:30.361366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.361543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.361568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.473 [2024-05-15 00:41:30.361784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.361976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.473 [2024-05-15 00:41:30.362003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.473 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.362191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.362353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.362383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.362576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.362762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.362788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.363023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.363188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.363219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.363443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.363646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.363672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.363838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.364039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.364066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 
00:26:04.474 [2024-05-15 00:41:30.364225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.364409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.364434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.364618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.364808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.364833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.365000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.365164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.365189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.365408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.365634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.365658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.365879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.366078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.366104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.366289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.366475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.366502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.366729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.366916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.366947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 
00:26:04.474 [2024-05-15 00:41:30.367143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.367308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.367333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.367535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.367712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.367737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.367958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.368116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.368142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.368304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.368491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.368524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.368753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.368937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.368964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.369164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.369324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.369350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.369543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.369733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.369758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 
00:26:04.474 [2024-05-15 00:41:30.369955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.370140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.370165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.370339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.370519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.370544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.370754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.370945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.370972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.371162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.371376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.371401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.371655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.371874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.371899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.372096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.372284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.372312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 00:26:04.474 [2024-05-15 00:41:30.372516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.372709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.474 [2024-05-15 00:41:30.372748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.474 qpair failed and we were unable to recover it. 
00:26:04.474 [2024-05-15 00:41:30.372967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.373177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.373203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.373427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.373617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.373642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.373854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.374078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.374104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.374263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.374451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.374477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.374671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.374854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.374879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.375131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.375287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.375312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.375501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.375724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.375749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 
00:26:04.475 [2024-05-15 00:41:30.375941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.376139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.376165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.376354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.376550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.376576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.376764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.376924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.376958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.377124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.377306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.377332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.377545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.377706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.377731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.377943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.378126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.378151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.378350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.378539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.378565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 
00:26:04.475 [2024-05-15 00:41:30.378725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.378881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.378908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.379120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.379340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.379377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4ffc000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.379576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.379764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.379801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.380040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.380200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.380238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.380479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.380635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.380662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.380876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.381064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.381093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.381288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.381455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.381483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 
00:26:04.475 [2024-05-15 00:41:30.381686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.381850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.381876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.382048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.382215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.382255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.382448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.382610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.382636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.382810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.382982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.383011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.383246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.383460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.383486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.383669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.383863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.475 [2024-05-15 00:41:30.383901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.475 qpair failed and we were unable to recover it. 00:26:04.475 [2024-05-15 00:41:30.384113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.384329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.384361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 
00:26:04.476 [2024-05-15 00:41:30.384555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.384749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.384775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.384982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.385176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.385202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.385394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.385587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.385614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.385809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.385975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.386001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.386192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.386417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.386446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.386660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.386851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.386877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.387060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.387248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.387274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 
00:26:04.476 [2024-05-15 00:41:30.387434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.387626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.387653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.387903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.388110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.388138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.388331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.388561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.388592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.388819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.389008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.389035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.389229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.389414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.389440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.389594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.389812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.389837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.390060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.390223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.390261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 
00:26:04.476 [2024-05-15 00:41:30.390468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.390659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.390684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.390882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.391091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.391117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.391280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.391483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.391510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.391704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.391859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.476 [2024-05-15 00:41:30.391886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.476 qpair failed and we were unable to recover it. 00:26:04.476 [2024-05-15 00:41:30.392084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.392246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.392272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.392503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.392664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.392695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.392851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.393060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.393087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 
00:26:04.477 [2024-05-15 00:41:30.393320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.393525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.393550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.393746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.393939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.393965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.394161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.394359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.394386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.394603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.394793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.394818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.395020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.395208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.395244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.395417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.395574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.395614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.395806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.395975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.396002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 
00:26:04.477 [2024-05-15 00:41:30.396190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.396394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.396427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.396585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.396780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.396806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.397020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.397179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.397205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.397406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.397569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.397601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.397801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.397988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.398014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.398202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.398410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.398445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 00:26:04.477 [2024-05-15 00:41:30.398633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.398818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.477 [2024-05-15 00:41:30.398844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.477 qpair failed and we were unable to recover it. 
00:26:04.483 [2024-05-15 00:41:30.458961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.459153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.459178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.459374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.459557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.459583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.459776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.459967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.459995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.460153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.460336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.460361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.460517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.460711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.460737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.460924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.461155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.461181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.461388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.461576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.461602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 
00:26:04.483 [2024-05-15 00:41:30.461764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.461954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.461981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.462176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.462355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.462380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.462541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.462717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.483 [2024-05-15 00:41:30.462745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.483 qpair failed and we were unable to recover it. 00:26:04.483 [2024-05-15 00:41:30.462905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.463119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.463145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.463304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.463492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.463517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.463742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.463941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.463967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.464151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.464365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.464390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 
00:26:04.484 [2024-05-15 00:41:30.464581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.464735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.464760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.464949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.465137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.465162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.465372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.465586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.465611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.465802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.465994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.466021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.466191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.466377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.466403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.466568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.466732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.466757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.466951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.467132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.467159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 
00:26:04.484 [2024-05-15 00:41:30.467326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.467491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.467516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.467734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.467892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.467917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.468131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.468283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.468309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.468496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.468660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.468685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.468846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.469107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.469134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.469293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.469478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.469503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.469661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.469864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.469889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 
00:26:04.484 [2024-05-15 00:41:30.470086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.470248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.470275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.470449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.470626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.470651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.470808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.470964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.470991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.471143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.471326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.471351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.471596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.471748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.471773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 00:26:04.484 [2024-05-15 00:41:30.471928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.472016] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.484 [2024-05-15 00:41:30.472050] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.484 [2024-05-15 00:41:30.472064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.484 [2024-05-15 00:41:30.472076] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.484 [2024-05-15 00:41:30.472086] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:04.484 [2024-05-15 00:41:30.472094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.472119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.484 qpair failed and we were unable to recover it. 
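The app_setup_trace notices above record that tracepoint group mask 0xFFFF was enabled and spell out two ways to pull the nvmf target's trace data while the application is still up. As a sketch of exactly what the notices suggest, run on the host where the target app is running (the /tmp destination is an arbitrary choice, not from the log):
  # Snapshot the live trace for the nvmf target instance with shm id 0, as the notice suggests
  spdk_trace -s nvmf -i 0
  # Or keep the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0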
00:26:04.484 [2024-05-15 00:41:30.472135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:04.484 [2024-05-15 00:41:30.472278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.484 [2024-05-15 00:41:30.472161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:04.484 [2024-05-15 00:41:30.472191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:04.484 [2024-05-15 00:41:30.472194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:04.485 [2024-05-15 00:41:30.472460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.472485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.472639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.472855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.472881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.473181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.473340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.473371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.473537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.473701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.473728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.473886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.474052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.474078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.474244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.474449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.474475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 
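The reactor_run notices here show the target's event framework coming up on cores 4-7 while the initiator-side connects are still failing (other cores may have started elsewhere in the log). Cores 4-7 correspond to core-mask bits 0xF0, which SPDK applications normally receive via -m; a hypothetical launch line consistent with these notices, with the binary path and every other flag assumed rather than taken from this log, would be:
  # Hypothetical: only the core mask (cores 4-7 => 0xF0) is inferred from the reactor notices
  ./build/bin/nvmf_tgt -m 0xF0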
00:26:04.485 [2024-05-15 00:41:30.474665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.474832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.474857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.475035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.475202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.475227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.475425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.475583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.475610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.475787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.475957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.475983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.476152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.476347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.476373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.476526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.476700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.476725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.476910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.477076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.477107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 
00:26:04.485 [2024-05-15 00:41:30.477388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.477687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.477714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.477918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.478107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.478133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.478282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.478478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.478505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.478685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.478907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.478941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.479122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.479316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.479342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.479518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.479703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.479729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.479898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.480115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.480142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 
00:26:04.485 [2024-05-15 00:41:30.480368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.480525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.480550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.480703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.480886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.480912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.481090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.481255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.481285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.481569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.481730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.481755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.481922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.482098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.482123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.482315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.482523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.482549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 00:26:04.485 [2024-05-15 00:41:30.482716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.482877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.482903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.485 qpair failed and we were unable to recover it. 
00:26:04.485 [2024-05-15 00:41:30.483061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.485 [2024-05-15 00:41:30.483222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.483248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.483438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.483644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.483669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.483860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.484064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.484091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.484255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.484427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.484453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.484628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.484787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.484815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.484979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.485134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.485159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.485371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.485576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.485601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 
00:26:04.486 [2024-05-15 00:41:30.485755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.485970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.485997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.486149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.486313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.486340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.486499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.486684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.486710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.486910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.487085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.487111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.487293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.487450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.487476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.487640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.487797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.487823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.488001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.488163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.488190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 
00:26:04.486 [2024-05-15 00:41:30.488373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.488547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.488573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.488783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.488995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.489027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.489199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.489365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.489392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.489548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.489741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.489767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.489928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.490141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.490168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.490350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.490509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.490535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.486 qpair failed and we were unable to recover it. 00:26:04.486 [2024-05-15 00:41:30.490721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.490921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.486 [2024-05-15 00:41:30.490952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 
00:26:04.487 [2024-05-15 00:41:30.491115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.491296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.491322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.491482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.491642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.491668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.491951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.492124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.492150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.492346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.492503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.492528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.492679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.492830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.492856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.493030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.493197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.493224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.493390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.493553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.493579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 
00:26:04.487 [2024-05-15 00:41:30.493762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.493939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.493965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.494131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.494296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.494322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.494486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.494646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.494672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.494861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.495025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.495052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.495228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.495396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.495423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.495611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.495777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.495802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.495977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.496170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.496196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 
00:26:04.487 [2024-05-15 00:41:30.496375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.496526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.496552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.496724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.496878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.496904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.497082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.497243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.497270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.497439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.497617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.497643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.497805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.497967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.497993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.498159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.498329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.498354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 00:26:04.487 [2024-05-15 00:41:30.498511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.498674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.487 [2024-05-15 00:41:30.498700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.487 qpair failed and we were unable to recover it. 
[... the same three messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeat for every retry from 00:41:30.498862 through 00:41:30.552340 ...]
00:26:04.493 [2024-05-15 00:41:30.552541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.552702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.552728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.552963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.553156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.553181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.553341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.553539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.553565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.553771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.553969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.553996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.554255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.554439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.554465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.554680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.554839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.554864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.555037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.555222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.555248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 
00:26:04.493 [2024-05-15 00:41:30.555409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.555574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.555599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.555778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.555981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.556007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.556174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.556356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.556381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.556540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.556723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.556748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.556905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.557097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.557123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.557310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.557519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.557544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.557741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.557928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.557961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 
00:26:04.493 [2024-05-15 00:41:30.558162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.558324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.558350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.558539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.558722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.558747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.558942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.559105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.559131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.559292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.559492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.559518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.559680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.559844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.559870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.560032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.560218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.560244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 00:26:04.493 [2024-05-15 00:41:30.560397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.560550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.560575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.493 qpair failed and we were unable to recover it. 
00:26:04.493 [2024-05-15 00:41:30.560763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.560965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.493 [2024-05-15 00:41:30.560991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.561151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.561345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.561370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.561546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.561707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.561732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.561892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.562068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.562094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.562264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.562452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.562478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.562658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.562833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.562858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.563050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.563224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.563249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 
00:26:04.494 [2024-05-15 00:41:30.563452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.563637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.563662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.563846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.564027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.564054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.564231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.564404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.564430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.564587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.564772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.564797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.564985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.565145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.565171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.565327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.565503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.565528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.565697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.565886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.565913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 
00:26:04.494 [2024-05-15 00:41:30.566103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.566272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.566300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.566535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.566699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.566725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.566885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.567080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.567107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.567265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.567447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.567472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.567629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.567793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.567818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.567982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.568134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.568160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.568368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.568526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.568551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 
00:26:04.494 [2024-05-15 00:41:30.568744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.568899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.568926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.569115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.569300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.569325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.569493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.569683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.569709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.569891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.570084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.570113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.494 qpair failed and we were unable to recover it. 00:26:04.494 [2024-05-15 00:41:30.570285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.494 [2024-05-15 00:41:30.570472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.570498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.570684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.570849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.570876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.571051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.571245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.571270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 
00:26:04.495 [2024-05-15 00:41:30.571425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.571583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.571608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.571757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.571951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.571978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.572132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.572312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.572338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.572519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.572705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.572731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.572902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.573076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.573102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.573259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.573446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.573472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.573655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.573822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.573848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 
00:26:04.495 [2024-05-15 00:41:30.574038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.574195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.574220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.574403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.574585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.574610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.574770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.574944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.574970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.575135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.575297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.575323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.575478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.575641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.575666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.575855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.576015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.576041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.576200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.576390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.576415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 
00:26:04.495 [2024-05-15 00:41:30.576572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.576719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.576744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.576921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.577093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.577118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.577287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.577475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.577504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.577669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.577829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.577854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.578026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.578179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.578205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.578359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.578546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.578572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.578731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.578911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.578943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 
00:26:04.495 [2024-05-15 00:41:30.579138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.579334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.579359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.579514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.579676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.579701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.495 qpair failed and we were unable to recover it. 00:26:04.495 [2024-05-15 00:41:30.579860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.580025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.495 [2024-05-15 00:41:30.580052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.580222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.580387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.580413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.580602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.580817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.580843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.581009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.581166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.581191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.581387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.581545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.581570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 
00:26:04.496 [2024-05-15 00:41:30.581750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.581939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.581964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.582145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.582331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.582356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.582537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.582725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.582751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.582912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.583072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.583098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.583254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.583444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.583469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.583625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.583809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.583834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.583998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.584165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.584190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 
00:26:04.496 [2024-05-15 00:41:30.584354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.584511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.584536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.584713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.584895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.584920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.585102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.585267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.585295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.585477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.585643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.585669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.585827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.586004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.586030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.586190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.586390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.586415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.586601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.586765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.586792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 
00:26:04.496 [2024-05-15 00:41:30.586974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.587137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.587164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.587326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.587489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.587514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.587683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.587864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.587889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.588081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.588248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.588274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.588441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.588593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.588618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.588773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.588981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.589008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 00:26:04.496 [2024-05-15 00:41:30.589200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.589356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.496 [2024-05-15 00:41:30.589383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.496 qpair failed and we were unable to recover it. 
00:26:04.497 [2024-05-15 00:41:30.589572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.589724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.589750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.497 qpair failed and we were unable to recover it. 00:26:04.497 [2024-05-15 00:41:30.589908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.590094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.590119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.497 qpair failed and we were unable to recover it. 00:26:04.497 [2024-05-15 00:41:30.590298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.590461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.590486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.497 qpair failed and we were unable to recover it. 00:26:04.497 [2024-05-15 00:41:30.590646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.590813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.590838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.497 qpair failed and we were unable to recover it. 00:26:04.497 [2024-05-15 00:41:30.590996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.591187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.591212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.497 qpair failed and we were unable to recover it. 00:26:04.497 [2024-05-15 00:41:30.591399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.591595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.591620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.497 qpair failed and we were unable to recover it. 00:26:04.497 [2024-05-15 00:41:30.591806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.591998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.497 [2024-05-15 00:41:30.592024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.497 qpair failed and we were unable to recover it. 
00:26:04.497 [2024-05-15 00:41:30.592205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.497 [2024-05-15 00:41:30.592405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.497 [2024-05-15 00:41:30.592430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.497 qpair failed and we were unable to recover it.
[... the same four-line error sequence (two posix_sock_create connect() failures with errno = 111, i.e. ECONNREFUSED, followed by an nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x9b0420 at 10.0.0.2 port 4420 and "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, until the final occurrence below ...]
00:26:04.779 [2024-05-15 00:41:30.651646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.779 [2024-05-15 00:41:30.651852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.779 [2024-05-15 00:41:30.651876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.779 qpair failed and we were unable to recover it.
00:26:04.779 [2024-05-15 00:41:30.652027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.652177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.652205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.652362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.652541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.652567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.652727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.652906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.652935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.653100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.653246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.653271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.653461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.653639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.653664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.653816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.653977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.654003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.654188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.654370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.654395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 
00:26:04.779 [2024-05-15 00:41:30.654558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.654736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.654762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.654970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.655151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.655177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.655350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.655538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.655563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.655728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.655926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.655956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.656156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.656340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.656365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.656552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.656741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.656766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.656935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.657097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.657122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 
00:26:04.779 [2024-05-15 00:41:30.657276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.657424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.657450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.657606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.657770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.657796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.657973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.658132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.658158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.658338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.658512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.658537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.658726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.658883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.658909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.659102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.659281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.659307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.659464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.659645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.659671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 
00:26:04.779 [2024-05-15 00:41:30.659862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.660046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.660072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.660249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.660425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.660450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.660618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.660801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.660826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.661015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.661196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.661221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.661405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.661587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.661612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.779 [2024-05-15 00:41:30.661799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.661963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.779 [2024-05-15 00:41:30.661989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.779 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.662169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.662320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.662345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 
00:26:04.780 [2024-05-15 00:41:30.662524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.662703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.662728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.662918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.663110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.663135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.663322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.663501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.663526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.663707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.663902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.663927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.664099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.664278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.664303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.664513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.664702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.664727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.664882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.665071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.665097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 
00:26:04.780 [2024-05-15 00:41:30.665253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.665415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.665444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.665608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.665762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.665787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.665972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.666155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.666181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.666366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.666520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.666546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.666732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.666955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.666992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.667155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.667308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.667333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.667515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.667696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.667725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 
00:26:04.780 [2024-05-15 00:41:30.667877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.668063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.668090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.668282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.668438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.668464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.668617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.668800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.668826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.668986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.669141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.669166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.669315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.669524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.669549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.669707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.669868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.669893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.670082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.670255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.670281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 
00:26:04.780 [2024-05-15 00:41:30.670464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.670647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.670673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.670884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.671097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.671123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.671327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.671497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.671523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.671712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.671891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.671917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.672105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.672271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.672296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.672484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.672695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.672720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 00:26:04.780 [2024-05-15 00:41:30.672876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.673027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.673053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.780 qpair failed and we were unable to recover it. 
00:26:04.780 [2024-05-15 00:41:30.673242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.780 [2024-05-15 00:41:30.673403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.673430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.673649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.673805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.673830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.673986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.674141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.674166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.674355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.674538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.674563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.674724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.674890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.674916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.675090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.675245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.675270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.675422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.675599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.675624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 
00:26:04.781 [2024-05-15 00:41:30.675814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.675971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.675997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.676154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.676345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.676370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.676545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.676749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.676774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.676943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.677109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.677134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.677296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.677454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.677480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.677647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.677802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.677827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.678010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.678177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.678202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 
00:26:04.781 [2024-05-15 00:41:30.678356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.678567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.678592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.678747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.678954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.678991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.679172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.679343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.679369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.679558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.679739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.679765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.679975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.680130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.680155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.680319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.680479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.680504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.680655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.680820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.680844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 
00:26:04.781 [2024-05-15 00:41:30.681012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.681200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.681225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.681386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.681539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.681564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.681717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.681898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.681923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.682106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.682283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.781 [2024-05-15 00:41:30.682308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.781 qpair failed and we were unable to recover it. 00:26:04.781 [2024-05-15 00:41:30.682494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.682670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.682695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.682878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.683061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.683088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.683248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.683406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.683431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 
00:26:04.782 [2024-05-15 00:41:30.683582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.683726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.683751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.683903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.684090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.684116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.684293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.684478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.684503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.684693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.684845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.684870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.685071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.685251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.685276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.685454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.685637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.685661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.685818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.685980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.686005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 
00:26:04.782 [2024-05-15 00:41:30.686210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.686382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.686407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.686612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.686762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.686791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.686974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.687133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.687158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.687328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.687485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.687511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.687664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.687843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.687868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.688058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.688240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.688264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.688430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.688585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.688609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 
00:26:04.782 [2024-05-15 00:41:30.688814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.688972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.688998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.689154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.689312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.689337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.689502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.689663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.689688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.689840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.690000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.690026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.690215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.690368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.690394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.690585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.690742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.690766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.690918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.691087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.691112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 
00:26:04.782 [2024-05-15 00:41:30.691276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.691455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.691480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.691670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.691830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.691855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.692044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.692201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.692226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.692402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.692552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.692578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.692729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.692891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.692916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.693081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.693238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.782 [2024-05-15 00:41:30.693263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.782 qpair failed and we were unable to recover it. 00:26:04.782 [2024-05-15 00:41:30.693436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.693616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.693641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 
00:26:04.783 [2024-05-15 00:41:30.693814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.694001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.694027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.694184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.694380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.694406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.694556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.694746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.694771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.694951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.695138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.695163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.695321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.695501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.695526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.695710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.695924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.695954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.696128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.696289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.696314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 
00:26:04.783 [2024-05-15 00:41:30.696495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.696678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.696703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.696880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.697064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.697090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.697251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.697401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.697427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.697580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.697764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.697789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.697967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.698129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.698155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.698350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.698500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.698525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.698710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.698864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.698889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 
00:26:04.783 [2024-05-15 00:41:30.699137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.699291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.699316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.699501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.699651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.699676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.699841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.700034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.700060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.700249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.700405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.700430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.700640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.700847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.700873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.701078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.701237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.701262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.701445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.701625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.701650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 
00:26:04.783 [2024-05-15 00:41:30.701822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.702013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.702039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.702204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.702358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.702383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.702547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.702735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.702760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.702952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.703163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.703188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.703368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.703522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.703546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.703732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.703903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.703928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.704124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.704305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.704331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 
00:26:04.783 [2024-05-15 00:41:30.704494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.704680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.783 [2024-05-15 00:41:30.704705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.783 qpair failed and we were unable to recover it. 00:26:04.783 [2024-05-15 00:41:30.704890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.705054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.705080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.705290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.705462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.705487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.705673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.705853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.705883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.706065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.706218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.706243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.706432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.706585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.706610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.706759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.706941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.706974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 
00:26:04.784 [2024-05-15 00:41:30.707157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.707311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.707336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.707493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.707679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.707704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.707883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.708080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.708106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.708273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.708430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.708455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.708621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.708792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.708817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.709002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.709154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.709179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.709387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.709533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.709558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 
00:26:04.784 [2024-05-15 00:41:30.709745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.709903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.709935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.710096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.710252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.710277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.710467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.710625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.710650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.710803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.710958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.710984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.711134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.711330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.711355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.711511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.711661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.711686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.711865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.712063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.712089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 
00:26:04.784 [2024-05-15 00:41:30.712246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.712407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.712432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.712611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.712784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.712809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.713027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.713213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.713238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.713401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.713600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.713625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.713795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.714006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.714033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.714248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.714438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.714463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.714619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.714775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.714801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 
00:26:04.784 [2024-05-15 00:41:30.715012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.715224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.715249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.715412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.715573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.715601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.715765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.715948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.784 [2024-05-15 00:41:30.715976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.784 qpair failed and we were unable to recover it. 00:26:04.784 [2024-05-15 00:41:30.716177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.716331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.716356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.716544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.716717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.716742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.716914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.717116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.717141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.717348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.717557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.717582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 
00:26:04.785 [2024-05-15 00:41:30.717732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.717917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.717960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.718134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.718325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.718351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.718534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.718686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.718712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.718922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.719087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.719112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.719300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.719490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.719516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.719680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.719833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.719858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.720012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.720216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.720241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 
00:26:04.785 [2024-05-15 00:41:30.720455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.720604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.720629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.720807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.720967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.720993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.721178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.721333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.721364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.721526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.721715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.721740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.721939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.722102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.722127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.722280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.722463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.722488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.722679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.722835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.722861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 
00:26:04.785 [2024-05-15 00:41:30.723016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.723179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.723204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.723357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.723506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.723531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.723728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.723917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.723950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.724170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.724349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.724379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.724576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.724739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.724765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.724948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.725132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.725157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.785 [2024-05-15 00:41:30.725328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.725503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.725529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 
00:26:04.785 [2024-05-15 00:41:30.725697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.725881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.785 [2024-05-15 00:41:30.725906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.785 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.726070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.726276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.726301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.726467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.726655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.726680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.726843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.727001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.727028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.727215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.727375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.727401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.727623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.727773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.727799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.728014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.728192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.728218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 
00:26:04.786 [2024-05-15 00:41:30.728405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.728577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.728603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.728786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.728951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.728977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.729166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.729344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.729371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.729582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.729754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.729779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.729948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.730137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.730163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.730327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.730487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.730513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.730724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.730915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.730945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 
00:26:04.786 [2024-05-15 00:41:30.731104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.731258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.731285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.731474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.731627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.731653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.731837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.732008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.732034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.732213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.732369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.732394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.732549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.732705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.732731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.732917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.733092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.733118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.733286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.733504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.733530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 
00:26:04.786 [2024-05-15 00:41:30.733690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.733851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.733876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.734083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.734244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.734272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.734435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.734586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.734611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.734774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.734974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.735000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.735183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.735338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.735365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.735531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.735751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.735777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 00:26:04.786 [2024-05-15 00:41:30.735945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.736113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.786 [2024-05-15 00:41:30.736138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.786 qpair failed and we were unable to recover it. 
00:26:04.786 [2024-05-15 00:41:30.736327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.786 [2024-05-15 00:41:30.736485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.786 [2024-05-15 00:41:30.736513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.786 qpair failed and we were unable to recover it.
00:26:04.786 [2024-05-15 00:41:30.736700 through 00:41:30.793880] the same error group repeats on every reconnect attempt: two posix.c:1037:posix_sock_create connect() failures with errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it."
00:26:04.792 [2024-05-15 00:41:30.794084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.792 [2024-05-15 00:41:30.794263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.792 [2024-05-15 00:41:30.794288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420
00:26:04.792 qpair failed and we were unable to recover it.
00:26:04.792 [2024-05-15 00:41:30.794478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.794685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.794717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.794882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.795045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.795074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.795237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.795430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.795455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.795615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.795827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.795853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.796022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.796187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.796212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.796434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.796602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.796629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.796790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.796954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.796979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 
00:26:04.792 [2024-05-15 00:41:30.797136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.797331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.797357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.797517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.797711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.797736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.797897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.798094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.798120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.798279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.798470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.798496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.798656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.798835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.798860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.799033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.799212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.799238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.799393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.799548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.799573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 
00:26:04.792 [2024-05-15 00:41:30.799729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.799890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.799916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.800087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.800260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.800285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.800446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.800602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.800628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.800794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.800979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.801006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.801202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.801381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.801407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.801603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.801759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.801784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.801949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.802109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.802134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 
00:26:04.792 [2024-05-15 00:41:30.802300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.802491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.802518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.802706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.802871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.802896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.792 qpair failed and we were unable to recover it. 00:26:04.792 [2024-05-15 00:41:30.803073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.792 [2024-05-15 00:41:30.803275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.803300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.803461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.803610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.803635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.803817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.804001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.804027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.804181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.804346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.804371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.804532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.804723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.804749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 
00:26:04.793 [2024-05-15 00:41:30.804934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.805132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.805157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.805330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.805488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.805513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.805695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.805906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.805935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.806107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.806295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.806320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.806502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.806706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.806731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.806913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.807082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.807107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.807295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.807456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.807483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 
00:26:04.793 [2024-05-15 00:41:30.807675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.807859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.807884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.808116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.808278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.808303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.808459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.808645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.808669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.808853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.809023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.809048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.809248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.809406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.809431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.809590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.809775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.809801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.809952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.810133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.810158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 
00:26:04.793 [2024-05-15 00:41:30.810350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.810545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.810571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.810780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.810954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.810981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.811129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.811320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.811346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.811501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.811710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.811734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.811897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.812087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.812113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.812335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.812519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.812544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.812722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.812903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.812927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 
00:26:04.793 [2024-05-15 00:41:30.813132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.813318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.813343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.813511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.813702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.813727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.813914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.814090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.814119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.814277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.814449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.814474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.793 qpair failed and we were unable to recover it. 00:26:04.793 [2024-05-15 00:41:30.814655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.793 [2024-05-15 00:41:30.814814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.814840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.815015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.815168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.815193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.815343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.815521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.815547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 
00:26:04.794 [2024-05-15 00:41:30.815721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.815923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.815954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.816115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.816274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.816299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.816455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.816644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.816669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.816824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.817008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.817034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.817195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.817385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.817410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.817569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.817762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.817788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.817953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.818122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.818148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 
00:26:04.794 [2024-05-15 00:41:30.818303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.818489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.818514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.818699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.818911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.818963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.819140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.819312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.819337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.819501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.819713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.819738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.819889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.820052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.820077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.820266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.820475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.820499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.820688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.820844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.820869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 
00:26:04.794 [2024-05-15 00:41:30.821023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.821238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.821263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.821421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.821610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.821636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.821791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.821979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.822005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.822167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.822351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.822377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.822536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.822697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.822721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.822902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.823092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.823117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.823300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.823512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.823537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 
00:26:04.794 [2024-05-15 00:41:30.823712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.823895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.823921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.794 [2024-05-15 00:41:30.824094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.824257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.794 [2024-05-15 00:41:30.824282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.794 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.824439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.824604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.824630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.824815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.824996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.825022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.825173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.825331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.825356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.825552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.825736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.825761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.825933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.826125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.826150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 
00:26:04.795 [2024-05-15 00:41:30.826344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.826524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.826549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.826746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.826902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.826927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.827106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.827281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.827306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.827468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.827651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.827676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.827822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.828003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.828029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.828207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.828380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.828405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.828583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.828762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.828787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 
00:26:04.795 [2024-05-15 00:41:30.829000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.829177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.829202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.829383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.829586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.829616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.829799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.830002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.830028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.830193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.830352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.830377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.830590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.830770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.830795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.830952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.831142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.831168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.831339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.831516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.831541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 
00:26:04.795 [2024-05-15 00:41:30.831725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.831877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.831902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.832091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.832277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.832302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.832474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.832678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.832703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.832864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.833048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.833074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.833252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.833430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.833455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.833631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.833789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.833814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.834018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.834215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.834240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 
00:26:04.795 [2024-05-15 00:41:30.834399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.834558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.834584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.834787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.834946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.834972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.835151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.835343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.835368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.795 [2024-05-15 00:41:30.835526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.835716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.795 [2024-05-15 00:41:30.835740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.795 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.835916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.836123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.836149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.836297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.836464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.836489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.836694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.836876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.836902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 
00:26:04.796 [2024-05-15 00:41:30.837092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.837280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.837305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.837491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.837676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.837701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.837863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.838050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.838078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.838238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.838423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.838448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.838661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.838847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.838872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.839039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.839228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.839253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.839430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.839586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.839611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 
00:26:04.796 [2024-05-15 00:41:30.839796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.839961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.839988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.840184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.840365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.840390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.840571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.840751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.840776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.840977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.841157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.841182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.841367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.841528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.841556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.841769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.841962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.841988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.842138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.842342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.842367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 
00:26:04.796 [2024-05-15 00:41:30.842548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.842730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.842755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.842955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.843156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.843181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.843344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.843535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.843560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.843717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.843878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.843903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.844089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.844278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.844305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.844485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.844656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.844681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.844951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.845110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.845135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 
00:26:04.796 [2024-05-15 00:41:30.845321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.845511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.845536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.845709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.845900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.845925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.846100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.846259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.846284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.846454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.846637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.846662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.846868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.847086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.847112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.796 qpair failed and we were unable to recover it. 00:26:04.796 [2024-05-15 00:41:30.847277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.847440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.796 [2024-05-15 00:41:30.847467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.847627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.847810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.847834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 
00:26:04.797 [2024-05-15 00:41:30.848021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.848298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.848323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.848503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.848715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.848741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.848906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.849078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.849104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.849312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.849491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.849520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.849682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.849833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.849859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.850020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.850203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.850228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.850384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.850555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.850580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 
00:26:04.797 [2024-05-15 00:41:30.850733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.851002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.851028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.851217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.851402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.851427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.851615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.851829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.851854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.852040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.852219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.852244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.852440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.852588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.852613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.852820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.852995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.853021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.853210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.853393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.853418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 
00:26:04.797 [2024-05-15 00:41:30.853584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.853760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.853785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.853985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.854146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.854171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.854381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.854549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.854577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.854765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.854951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.854977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.855245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.855401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.855426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.855612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.855795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.855820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.855979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.856182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.856207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 
00:26:04.797 [2024-05-15 00:41:30.856396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.856660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.856685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.856834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.857098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.857124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.857279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.857488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.857513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.857705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.857861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.857886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.858056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.858245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.858270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.858452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.858607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.858632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 00:26:04.797 [2024-05-15 00:41:30.858835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.859013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.859039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.797 qpair failed and we were unable to recover it. 
00:26:04.797 [2024-05-15 00:41:30.859202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.797 [2024-05-15 00:41:30.859394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.859420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.859620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.859773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.859798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.859988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.860253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.860277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.860464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.860620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.860645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.860807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.860981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.861006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.861199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.861375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.861400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.861589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.861859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.861884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 
00:26:04.798 [2024-05-15 00:41:30.862064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.862280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.862305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.862463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.862670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.862696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.862862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.863051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.863077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.863345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.863520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.863544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.863732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.863884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.863909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.864111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.864288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.864313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.864503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.864770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.864795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 
00:26:04.798 [2024-05-15 00:41:30.864984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.865174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.865199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.865384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.865568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.865593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.865752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.865909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.865939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.866122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.866289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.866315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.866503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.866661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.866686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.866836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.867009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.867035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.867231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.867410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.867435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 
00:26:04.798 [2024-05-15 00:41:30.867647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.867847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.867872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.868084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.868269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.868294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.868476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.868639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.868666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.868862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.869044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.869070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.869253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.869410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.869435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.869620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.869780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.869809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.869982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.870175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.870200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 
00:26:04.798 [2024-05-15 00:41:30.870354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.870512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.870539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.870687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.870893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.870918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.798 qpair failed and we were unable to recover it. 00:26:04.798 [2024-05-15 00:41:30.871127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.871282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.798 [2024-05-15 00:41:30.871307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.871489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.871678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.871703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.871888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.872066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.872092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.872266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.872443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.872468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.872622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.872811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.872836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 
00:26:04.799 [2024-05-15 00:41:30.872991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.873142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.873167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.873358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.873547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.873572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.873764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.873945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.873971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.874158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.874312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.874337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.874523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.874706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.874732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.874888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.875080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.875105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.875294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.875467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.875492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 
00:26:04.799 [2024-05-15 00:41:30.875663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.875838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.875863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.876021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.876169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.876195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.876374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.876594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.876619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.876779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.876937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.876963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.877143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.877325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.877350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.877545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.877732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.877757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.877921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.878093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.878118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 
00:26:04.799 [2024-05-15 00:41:30.878271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.878476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.878501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.878688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.878849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.878874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.879058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.879216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.879243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.879409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.879567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.879592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.879765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.879951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.879977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.880144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.880295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.880320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 00:26:04.799 [2024-05-15 00:41:30.880497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.880644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.880669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.799 qpair failed and we were unable to recover it. 
00:26:04.799 [2024-05-15 00:41:30.880855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.799 [2024-05-15 00:41:30.881037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.881062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.881275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.881429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.881454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.881640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.881812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.881837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.882032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.882224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.882249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.882442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.882632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.882657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.882828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.883017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.883043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.883228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.883383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.883408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 
00:26:04.800 [2024-05-15 00:41:30.883620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.883772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.883797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.883990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.884187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.884212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.884372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.884534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.884559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.884718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.884900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.884926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.885106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.885278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.885308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.885493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.885668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.885692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.885900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.886092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.886118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 
00:26:04.800 [2024-05-15 00:41:30.886275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.886467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.886493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.886651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.886839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.886865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.887024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.887207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.887232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.887417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.887589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.887614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.887798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.887989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.888015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.888203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.888363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.888389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.888577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.888741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.888767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 
00:26:04.800 [2024-05-15 00:41:30.888922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.889082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.889108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.889293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.889560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.889585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.889852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.890005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.890031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.890197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.890372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.890397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.890617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.890778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.890803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.891069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.891243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.891267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.891454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.891642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.891667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 
00:26:04.800 [2024-05-15 00:41:30.891820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.892032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.892057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.892213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.892390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.892415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.892590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.892741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.892766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.800 [2024-05-15 00:41:30.892915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.893076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.800 [2024-05-15 00:41:30.893101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.800 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.893325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.893481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.893506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.893721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.893897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.893922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.894100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.894311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.894336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 
00:26:04.801 [2024-05-15 00:41:30.894523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.894708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.894733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.894919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.895117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.895143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.895305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.895492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.895517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.895669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.895871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.895896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.896111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.896291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.896316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.896477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.896640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.896665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.896837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.896997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.897023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 
00:26:04.801 [2024-05-15 00:41:30.897076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ad0b0 (9): Bad file descriptor 00:26:04.801 [2024-05-15 00:41:30.897322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.897497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.897525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.897709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.897899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.897924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.898124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.898303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.898328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.898512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.898674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.898700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.898870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.899036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.899062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.899227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.899393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.899420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.899580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.899788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.899813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 
00:26:04.801 [2024-05-15 00:41:30.899983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.900147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.900174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.900351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.900513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.900538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.900730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.900905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.900935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.901100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.901277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.901302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.901454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.901628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.901653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.901834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.902026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.902052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.902215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.902392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.902417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 
00:26:04.801 [2024-05-15 00:41:30.902624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.902808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.902833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.902989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.903173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.903198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.903361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.903515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.903541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.903699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.903857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.903882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.904077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.904285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.904310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.904464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.904648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.904673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.801 [2024-05-15 00:41:30.904865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.905039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.905066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 
00:26:04.801 [2024-05-15 00:41:30.905235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.905412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.801 [2024-05-15 00:41:30.905437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.801 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.905647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.905828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.905853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.906014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.906166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.906191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.906401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.906590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.906616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.906771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.906928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.906960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.907156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.907322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.907347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.907530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.907682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.907708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 
00:26:04.802 [2024-05-15 00:41:30.907906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.908102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.908128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.908306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.908512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.908538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.908712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.908861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.908886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.909122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.909289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.909316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.909504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.909679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.909704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.909934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.910125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.910150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.910309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.910471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.910497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 
00:26:04.802 [2024-05-15 00:41:30.910656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.910835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.910860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.911033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.911218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.911243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.911430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.911623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.911650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.911865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.912021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.912047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.912238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.912426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.912450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.912616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.912780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.912804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.912993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.913149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.913174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 
00:26:04.802 [2024-05-15 00:41:30.913387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.913552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.913577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.913731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.913887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.913914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.914102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.914286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.914310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.914498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.914711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.914736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.914922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.915095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.915120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.915281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.915469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.915494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.915679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.915854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.915878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 
00:26:04.802 [2024-05-15 00:41:30.916043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.916199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.916224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.916412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.916602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.916628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.916807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.916972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.916998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.917159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.917316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.917342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.917502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.917683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.802 [2024-05-15 00:41:30.917708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.802 qpair failed and we were unable to recover it. 00:26:04.802 [2024-05-15 00:41:30.917869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.918061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.918087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.803 qpair failed and we were unable to recover it. 00:26:04.803 [2024-05-15 00:41:30.918242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.918432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.918457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.803 qpair failed and we were unable to recover it. 
00:26:04.803 [2024-05-15 00:41:30.918613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.918772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.918797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.803 qpair failed and we were unable to recover it. 00:26:04.803 [2024-05-15 00:41:30.918988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.919142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.919167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.803 qpair failed and we were unable to recover it. 00:26:04.803 [2024-05-15 00:41:30.919339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.919494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.919519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.803 qpair failed and we were unable to recover it. 00:26:04.803 [2024-05-15 00:41:30.919694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.919847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.803 [2024-05-15 00:41:30.919873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:04.803 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.920065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.920233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.920260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.920534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.920717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.920743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.920905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.921184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.921211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 
00:26:05.072 [2024-05-15 00:41:30.921385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.921541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.921568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.921757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.921922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.921954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.922111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.922289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.922315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.922500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.922685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.922711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.922916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.923108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.923133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.923353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.923562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.923587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.923772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.923956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.923983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 
00:26:05.072 [2024-05-15 00:41:30.924151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.924309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.924335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.924525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.924743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.924769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.924935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.925146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.925172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.925359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.925520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.925544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.925759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.925926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.925956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.926121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.926306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.926331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.926497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.926653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.926679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 
00:26:05.072 [2024-05-15 00:41:30.926885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.927074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.927099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.927253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.927436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.927461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.927654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.927825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.927850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.928013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.928223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.928253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.928441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.928651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.928676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.072 qpair failed and we were unable to recover it. 00:26:05.072 [2024-05-15 00:41:30.928832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.072 [2024-05-15 00:41:30.928998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.929024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.929229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.929384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.929409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 
00:26:05.073 [2024-05-15 00:41:30.929582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.929754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.929779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.929976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.930142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.930167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.930349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.930562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.930587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.930775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.930957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.930983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.931166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.931360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.931385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.931598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.931768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.931793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.931950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.932149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.932174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 
00:26:05.073 [2024-05-15 00:41:30.932353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.932565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.932590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.932751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.932942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.932970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.933123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.933302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.933328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.933519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.933697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.933722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.933933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.934107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.934132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.934287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.934473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.934498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.934648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.934801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.934826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 
00:26:05.073 [2024-05-15 00:41:30.935005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.935191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.935217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.935397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.935580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.935605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.935760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.935913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.935944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.936108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.936294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.936320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.936511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.936664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.936690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.936843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.937018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.937044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.937202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.937414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.937440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 
00:26:05.073 [2024-05-15 00:41:30.937625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.937781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.937806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.937965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.938120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.938146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.938412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.938598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.938624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.938812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.939004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.939030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.939186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.939346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.939371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.939639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.939824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.939849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.940039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.940233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.940260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 
00:26:05.073 [2024-05-15 00:41:30.940445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.940617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.940642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.940830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.941023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.941049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.941234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.941439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.941464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.941616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.941794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.941819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.942010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.942174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.942200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.942376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.942592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.942618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.073 qpair failed and we were unable to recover it. 00:26:05.073 [2024-05-15 00:41:30.942799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.942991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.073 [2024-05-15 00:41:30.943018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.943192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.943375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.943400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.943610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.943823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.943848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.944035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.944188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.944218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.944402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.944586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.944611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.944760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.944914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.944947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.945137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.945296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.945322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.945504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.945699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.945724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.945890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.946055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.946081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.946272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.946539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.946563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.946830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.947018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.947043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.947228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.947416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.947441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.947627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.947786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.947810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.947964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.948146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.948175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.948361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.948543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.948568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.948716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.948903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.948928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.949118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.949272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.949297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.949452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.949627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.949652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.949806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.949969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.949996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.950165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.950352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.950377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.950525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.950685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.950710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.950893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.951057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.951082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.951247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.951400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.951425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.951609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.951784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.951809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.951996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.952155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.952181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.952366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.952547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.952572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.952723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.952934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.952960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.953228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.953441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.953466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.953621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.953788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.953814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.954024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.954190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.954215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.954402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.954589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.954614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.954797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.954956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.954982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.955160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.955344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.955369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.955559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.955746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.955771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.955946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.956151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.956176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.956370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.956558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.956583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.956851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.957038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.957063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.957246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.957410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.957438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.957624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.957808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.957833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.957989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.958138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.958164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.958317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.958499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.958524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.958678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.958949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.958974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.959160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.959345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.959370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.959566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.959832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.959857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.960047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.960216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.960241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.960430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.960579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.960604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.960818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.961008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.961033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.961218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.961391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.961416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.961572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.961760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.961785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.961942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.962104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.962131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 
00:26:05.074 [2024-05-15 00:41:30.962348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.962530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.962555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.962733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.962922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.962952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.963120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.963274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.963301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.074 qpair failed and we were unable to recover it. 00:26:05.074 [2024-05-15 00:41:30.963485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.074 [2024-05-15 00:41:30.963671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.963698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.963856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.964072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.964102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.964263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.964426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.964451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.964654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.964805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.964830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 
00:26:05.075 [2024-05-15 00:41:30.965045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.965228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.965253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.965466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.965627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.965652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.965813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.965995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.966021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.966205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.966389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.966414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.966602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.966785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.966811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.966987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.967137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.967165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.967377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.967565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.967590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 
00:26:05.075 [2024-05-15 00:41:30.967778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.967951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.967977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.968168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.968328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.968353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.968512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.968672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.968699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.968861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.969044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.969070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.969226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.969403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.969428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.969590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.969741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.969767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.969953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.970139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.970164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 
00:26:05.075 [2024-05-15 00:41:30.970347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.970533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.970558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.970769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.970960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.970986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.971147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.971330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.971356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.971541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.971700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.971727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.971945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.972104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.972129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.972288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.972455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.972480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.972675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.972886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.972911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 
00:26:05.075 [2024-05-15 00:41:30.973085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.973243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.973268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.973419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.973598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.973623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.973777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.973963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.973989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.974176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.974370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.974397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.974550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.974724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.974750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.974907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.975063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.975089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.975274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.975435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.975461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 
00:26:05.075 [2024-05-15 00:41:30.975666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.975877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.975902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.976109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.976277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.976303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.976524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.976693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.976719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.976895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.977062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.977090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.977283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.977473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.977498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.977656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.977823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.977849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.978017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.978199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.978225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 
00:26:05.075 [2024-05-15 00:41:30.978412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.978571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.978598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.075 [2024-05-15 00:41:30.978805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.978976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.075 [2024-05-15 00:41:30.979003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.075 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.979163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.979321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.979348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.979544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.979708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.979734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.979910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.980078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.980104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.980270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.980435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.980460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.980612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.980767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.980792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 
00:26:05.076 [2024-05-15 00:41:30.980947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.981133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.981160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.981345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.981501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.981528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.981705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.981906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.981938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.982155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.982302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.982327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.982478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.982640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.982666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.982849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.983034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.983061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.983218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.983407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.983436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 
00:26:05.076 [2024-05-15 00:41:30.983594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.983783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.983809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.983992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.984152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.984178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.984340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.984489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.984514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.984696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.984906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.984936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.985098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.985271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.985297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.985461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.985653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.985678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.985841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.986012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.986040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 
00:26:05.076 [2024-05-15 00:41:30.986206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.986389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.986414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.986583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.986749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.986776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.986950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.987107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.987133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.987330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.987503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.987529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.987747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.987914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.987955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.988119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.988281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.988307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.988457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.988664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.988689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 
00:26:05.076 [2024-05-15 00:41:30.988887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.989050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.989076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.989233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.989393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.989418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.989577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.989731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.989757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.989953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.990164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.990190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.990348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.990503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.990529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.990727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.990882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.990908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.991082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.991260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.991285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 
00:26:05.076 [2024-05-15 00:41:30.991470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.991620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.991645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.991832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.992008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.992034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.992203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.992389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.992415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.992574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.992723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.992748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.992900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.993070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.993095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.993250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.993406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.993431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.993613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.993797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.993823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 
00:26:05.076 [2024-05-15 00:41:30.993983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.994141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.994167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.076 [2024-05-15 00:41:30.994332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.994503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.076 [2024-05-15 00:41:30.994528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.076 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.994707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.994869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.994895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.995071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.995228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.995254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.995426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.995575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.995600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.995764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.995934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.995960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.996112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.996298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.996323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 
00:26:05.077 [2024-05-15 00:41:30.996522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.996683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.996709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.996891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.997078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.997104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.997281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.997440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.997465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.997658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.997814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.997838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.998019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.998169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.998195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.998349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.998542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.998571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.998747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.998906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.998936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 
00:26:05.077 [2024-05-15 00:41:30.999101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.999269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.999295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.999473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.999681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:30.999707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:30.999862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.000068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.000094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.000259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.000424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.000451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.000609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.000784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.000809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.001001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.001171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.001196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.001357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.001543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.001569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 
00:26:05.077 [2024-05-15 00:41:31.001744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.001925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.001958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.002151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.002351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.002381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.002528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.002712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.002737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.002888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.003083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.003109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.003270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.003444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.003470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.003650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.003824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.003849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.004040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.004203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.004229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 
00:26:05.077 [2024-05-15 00:41:31.004392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.004544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.004569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.004753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.004950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.004976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.005157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.005329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.005355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.005539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.005693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.005718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.005871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.006043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.006069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.006252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.006436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.006461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.006653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.006816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.006843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 
00:26:05.077 [2024-05-15 00:41:31.007010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.007165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.007191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.007354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.007540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.007565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.007718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.007880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.007907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.008132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.008303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.008333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.008498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.008653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.008679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.008837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.008995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.009022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.009185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.009362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.009387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 
00:26:05.077 [2024-05-15 00:41:31.009569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.009724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.009750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.009941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.010124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.010150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.010307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.010463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.010489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.010679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.010859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.010884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.077 [2024-05-15 00:41:31.011058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.011212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.077 [2024-05-15 00:41:31.011238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.077 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.011422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.011575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.011600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.011784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.011950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.011978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 
00:26:05.078 [2024-05-15 00:41:31.012169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.012329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.012355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.012543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.012701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.012727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.012888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.013049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.013075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.013262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.013442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.013467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.013634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.013786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.013811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.013986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.014178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.014205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.014419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.014624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.014650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 
00:26:05.078 [2024-05-15 00:41:31.014815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.014979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.015005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.015162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.015324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.015351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.015547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.015710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.015736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.015928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.016096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.016122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.016282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.016550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.016577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.016748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.016938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.016966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.017133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.017315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.017342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 
00:26:05.078 [2024-05-15 00:41:31.017521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.017701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.017742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.017948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.018132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.018170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.018375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.018553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.018590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.018793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.018960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.018988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.019148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.019326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.019354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.019523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.019707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.019733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.019882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.020050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.020077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 
00:26:05.078 [2024-05-15 00:41:31.020249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.020431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.020458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.020614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.020805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.020831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.021003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.021166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.021192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.021371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.021542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.021568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.021746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.021942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.021968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.022127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.022395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.022420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.022638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.022799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.022824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 
00:26:05.078 [2024-05-15 00:41:31.022995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.023175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.023200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.023364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.023521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.023546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.023764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.024033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.024059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.078 qpair failed and we were unable to recover it. 00:26:05.078 [2024-05-15 00:41:31.024329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.024516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.078 [2024-05-15 00:41:31.024541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.024708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.024877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.024903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.025112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.025267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.025292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.025454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.025639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.025664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 
00:26:05.079 [2024-05-15 00:41:31.025816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.025969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.025995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.026183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.026340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.026365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.026530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.026696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.026723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.026912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.027107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.027133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.027305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.027574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.027600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.027788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.027968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.027995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.028156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.028322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.028347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 
00:26:05.079 [2024-05-15 00:41:31.028535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.028699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.028725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.028912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.029086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.029112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.029264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.029481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.029506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.029667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.029836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.029863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.030030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.030192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.030217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.030374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.030531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.030556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.030746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.030898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.030924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 
00:26:05.079 [2024-05-15 00:41:31.031097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.031269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.031294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.031451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.031642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.031667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.031833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.032003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.032030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.032187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.032349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.032374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.032542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.032704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.032729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.032913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.033119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.033145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.033312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.033469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.033494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 
00:26:05.079 [2024-05-15 00:41:31.033657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.033844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.033870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.034045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.034214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.034240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.034404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.034595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.034620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.034793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.034970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.034996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.035153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.035341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.035366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.035525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.035701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.035726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.035906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.036074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.036100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 
00:26:05.079 [2024-05-15 00:41:31.036259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.036451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.036477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.036746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.036898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.036957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.037178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.037379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.037417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.037642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.037854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.037891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.038089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.038295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.038323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.038486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.038641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.038666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.038834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.039008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.039036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 
00:26:05.079 [2024-05-15 00:41:31.039195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.039377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.039402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.039563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.039732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.039756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.079 qpair failed and we were unable to recover it. 00:26:05.079 [2024-05-15 00:41:31.039945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.040099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.079 [2024-05-15 00:41:31.040125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.040286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.040474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.040511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.040694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.040910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.040960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.041161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.041345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.041381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.041595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.041810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.041847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 
00:26:05.080 [2024-05-15 00:41:31.042056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.042238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.042275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.042483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.042687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.042724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.042918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.043089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.043116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.043277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.043461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.043486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.043703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.043911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.043945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.044107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.044307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.044343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5004000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.044526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.044702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.044731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 
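For reference: errno 111 on Linux is ECONNREFUSED, i.e. nothing on 10.0.0.2 accepted the TCP connection to port 4420 (the standard NVMe/TCP port), so every nvme_tcp_qpair_connect_sock attempt above fails the same way before the qpair is given up on. A minimal standalone sketch, not the SPDK posix sock code, that reproduces the same errno when no listener is present on that address and port:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Address and port mirror the log above; nothing else is SPDK-specific. */
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with a plain cc invocation, this prints "connect() failed, errno = 111 (Connection refused)" as long as no target is listening on 10.0.0.2:4420, which matches the error triplets repeated throughout this stretch of the log.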
00:26:05.080 [2024-05-15 00:41:31.044895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.045067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.045099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.045284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.045467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.045492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.045682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.045837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.045862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.046040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.046229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.046255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.046425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.046587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.046612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.046773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.046939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.046965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.047146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.047334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.047359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 
00:26:05.080 [2024-05-15 00:41:31.047528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.047716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.047743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.047917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.048083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.048108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.048272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.048494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.048519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.048699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.048856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.048886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.049077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.049260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.049285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.049495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.049652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.049677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.049854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.050017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.050043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 
00:26:05.080 [2024-05-15 00:41:31.050202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.050388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.050413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.050602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.050765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.050791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.050972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.051128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.051153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.051332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.051508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.051532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.051692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.051850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.051875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.052061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.052217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.052243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.052411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.052575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.052600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 
00:26:05.080 [2024-05-15 00:41:31.052796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.052955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.052981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.053136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.053291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.053317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.053471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.053637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.053662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.053825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.053988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.054014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.054192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.054356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.054381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.054559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.054727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.054754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.054910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.055098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.055123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 
00:26:05.080 [2024-05-15 00:41:31.055307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.055459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.055484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.055668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.055840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.055865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.056055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.056211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.056237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.056391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.056581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.080 [2024-05-15 00:41:31.056607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.080 qpair failed and we were unable to recover it. 00:26:05.080 [2024-05-15 00:41:31.056807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.056955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.056981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.057150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.057307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.057334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.057486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.057676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.057701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 
00:26:05.081 [2024-05-15 00:41:31.057883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.058041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.058067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.058263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.058423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.058448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.058663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.058815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.058840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.058998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.059191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.059217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.059407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.059587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.059614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.059797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.059988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.060013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.060174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.060339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.060366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 
00:26:05.081 [2024-05-15 00:41:31.060560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.060749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.060774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.060965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.061149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.061175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.061338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.061527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.061553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.061734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.061887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.061912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.062101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.062257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.062283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.062462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.062619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.062646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.062842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.063020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.063046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 
00:26:05.081 [2024-05-15 00:41:31.063206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.063366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.063392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.063579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.063730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.063755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.063936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.064105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.064130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.064308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.064482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.064507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.064664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.064834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.064859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.065031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.065199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.065224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.065411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.065572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.065599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 
00:26:05.081 [2024-05-15 00:41:31.065781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.065957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.065983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.066180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.066364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.066389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.066544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.066700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.066726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.066881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.067044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.067071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.067267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.067446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.067472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.067653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.067808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.067835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.067999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.068154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.068180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 
00:26:05.081 [2024-05-15 00:41:31.068331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.068518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.068544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.068701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.068859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.068886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.069068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.069250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.069275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.069463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.069619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.069644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.069808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.069963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.069990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.070187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.070335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.070360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.070552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.070750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.070775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 
00:26:05.081 [2024-05-15 00:41:31.070947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.071111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.071136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.081 qpair failed and we were unable to recover it. 00:26:05.081 [2024-05-15 00:41:31.071290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.081 [2024-05-15 00:41:31.071441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.071466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.071658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.071815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.071840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.072012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.072174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.072199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.072385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.072536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.072562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.072723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.072879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.072904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.073069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.073221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.073247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 
00:26:05.082 [2024-05-15 00:41:31.073434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.073620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.073646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.073835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.074000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.074026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.074212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.074368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.074393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.074554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.074711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.074736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.074895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.075066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.075093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.075248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.075406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.075431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.075579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.075731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.075756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 
00:26:05.082 [2024-05-15 00:41:31.075934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.076110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.076136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.076299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.076462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.076489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.076645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.076804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.076829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.076994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.077151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.077176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.077360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.077511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.077536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.077693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.077870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.077896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.078059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.078244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.078269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f500c000b90 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 
00:26:05.082 [2024-05-15 00:41:31.078446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.078623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.078652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.078834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.078995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.079023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.079188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.079343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.079369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.079552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.079751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.079777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.079935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.080093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.080119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.080292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.080507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.080533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.080693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.080854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.080881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 
00:26:05.082 [2024-05-15 00:41:31.081049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.081211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.081237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.081396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.081551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.081577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.081730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.081892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.081918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.082082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.082279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.082305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.082517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.082675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.082702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.082868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.083033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.083059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.083225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.083392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.083419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 
00:26:05.082 [2024-05-15 00:41:31.083577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.083731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.083757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.083938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.084100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.084125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.084282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.084441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.084468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.084634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.084785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.084811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.085000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.085155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.085181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.085362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.085523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.085548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.085765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.085917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.085955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 
00:26:05.082 [2024-05-15 00:41:31.086116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.086290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.086317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.086481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.086639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.086665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.086853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.087002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.087030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.082 [2024-05-15 00:41:31.087188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.087350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.082 [2024-05-15 00:41:31.087376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.082 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.087533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.087690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.087717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.087870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.088044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.088070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.088225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.088433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.088458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 
00:26:05.083 [2024-05-15 00:41:31.088619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.088775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.088801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.088963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.089120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.089146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.089307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.089492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.089518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.089677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.089851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.089876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.090042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.090196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.090222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.090383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.090541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.090566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.090725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.090893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.090920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 
00:26:05.083 [2024-05-15 00:41:31.091088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.091241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.091266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.091420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.091568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.091594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.091751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.091946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.091972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.092159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.092311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.092336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.092529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.092688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.092713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.092872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.093031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.093057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.093215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.093375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.093402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 
00:26:05.083 [2024-05-15 00:41:31.093566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.093754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.093781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.093956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.094118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.094145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.094334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.094491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.094516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.094670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.094815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.094840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.095023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.095206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.095232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.095422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.095604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.095629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.095812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.095995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.096022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 
00:26:05.083 [2024-05-15 00:41:31.096176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.096390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.096415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.096573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.096730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.096756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.096920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.097109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.097135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.097323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.097511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.097536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.097725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.097875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.097900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.098065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.098236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.098261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.098448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.098611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.098638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 
00:26:05.083 [2024-05-15 00:41:31.098804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.098989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.099016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.099195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.099347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.099373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.099533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.099716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.099741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.099905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.100071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.100097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.100262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.100419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.100444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.100606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.100782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.100811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.101003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.101160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.101187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 
00:26:05.083 [2024-05-15 00:41:31.101381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.101537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.101565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.101736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.101883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.101909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.102082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.102244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.102269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.102456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.102635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.102661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.102820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.103004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.103030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.103215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.103398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.103424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 00:26:05.083 [2024-05-15 00:41:31.103585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.103758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.083 [2024-05-15 00:41:31.103783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.083 qpair failed and we were unable to recover it. 
00:26:05.084 [2024-05-15 00:41:31.103948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.104097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.104122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.104279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.104460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.104490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.104667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.104862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.104888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.105097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.105253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.105279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.105464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.105646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.105671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.105825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.105980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.106006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.106193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.106352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.106376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 
00:26:05.084 [2024-05-15 00:41:31.106551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.106709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.106734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.106939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.107100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.107127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.107310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.107516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.107542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.107697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.107849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.107874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.108048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.108232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.108258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.108456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.108614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.108641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.108802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.108976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.109002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 
00:26:05.084 [2024-05-15 00:41:31.109189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.109354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.109380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.109539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.109726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.109752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.109953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.110112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.110138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.110320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.110481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.110506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.110662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.110847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.110873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.111042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.111211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.111236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.111409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.111569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.111596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 
00:26:05.084 [2024-05-15 00:41:31.111763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.111914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.111946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.112124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.112288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.112314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.112469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.112651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.112676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.112861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.113060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.113087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.113257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.113409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.113435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.113619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.113769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.113794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.113985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.114139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.114165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 
00:26:05.084 [2024-05-15 00:41:31.114351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.114536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.114560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.114750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.114975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.115002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.115178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.115338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.115363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.115557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.115722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.115748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.115909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.116097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.116124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.116287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.116496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.116522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.116682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.116836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.116862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 
00:26:05.084 [2024-05-15 00:41:31.117022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.117186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.117212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.117372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.117533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.117558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.117719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.117880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.117905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.084 [2024-05-15 00:41:31.118080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.118236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.084 [2024-05-15 00:41:31.118261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.084 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.118433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.118590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.118614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.118784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.118944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.118973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.119134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.119281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.119306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 
00:26:05.085 [2024-05-15 00:41:31.119458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.119668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.119697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.119857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.120023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.120049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.120225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.120381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.120406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.120559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.120715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.120740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.120913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.121111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.121137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.121296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.121470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.121496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.121678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.121825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.121850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 
00:26:05.085 [2024-05-15 00:41:31.122011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.122197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.122223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.122378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.122531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.122557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.122735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.122921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.122953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.123111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.123284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.123309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.123476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.123637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.123664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.123851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.124013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.124040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.124232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.124411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.124436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 
00:26:05.085 [2024-05-15 00:41:31.124590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.124766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.124791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.124985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.125140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.125168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.125347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.125526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.125553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.125731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.125905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.125940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.126097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.126279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.126304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.126498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.126651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.126677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.126836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.127088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.127118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 
00:26:05.085 [2024-05-15 00:41:31.127275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.127444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.127469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.127655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.127817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.127842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.128004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.128171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.128196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.128357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.128517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.128542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.128737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.128906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.128938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.129097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.129268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.129294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.129458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.129663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.129688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 
00:26:05.085 [2024-05-15 00:41:31.129877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.130033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.130061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.130246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.130402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.130431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.130601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.130761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.130786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.130957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.131115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.131142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.131298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.131480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.131506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.131689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.131850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.131876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.132074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.132261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.132287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 
00:26:05.085 [2024-05-15 00:41:31.132444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.132605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.132630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.132785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.132942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.132968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.133155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.133327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.133352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.133535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.133726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.133754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.133942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.134120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.134146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.134326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.134499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.085 [2024-05-15 00:41:31.134524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.085 qpair failed and we were unable to recover it. 00:26:05.085 [2024-05-15 00:41:31.134679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.134836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.134861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 
00:26:05.086 [2024-05-15 00:41:31.135076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.135239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.135265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.135429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.135584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.135609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.135769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.135934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.135962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.136124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.136315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.136341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.136505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.136719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.136745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.136922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.137124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.137150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.137332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.137501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.137526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 
00:26:05.086 [2024-05-15 00:41:31.137688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.137850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.137876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.138046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.138205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.138231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.138431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.138622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.138654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.138839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.139001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.139027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.139185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.139339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.139364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.139559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.139715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.139741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.139910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.140070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.140096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 
00:26:05.086 [2024-05-15 00:41:31.140257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.140423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.140449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.140634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.140794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.140820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.140977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.141130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.141155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.141331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.141492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.141517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.141676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.141869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.141895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.142072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.142232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.142258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.142416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.142601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.142627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 
00:26:05.086 [2024-05-15 00:41:31.142790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.142963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.142991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.143156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.143309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.143335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.143497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.143649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.143674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.143835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.144009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.144035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.144200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.144387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.144413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.144585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.144742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.144768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.144937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.145122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.145147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 
00:26:05.086 [2024-05-15 00:41:31.145345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.145525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.145553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.145717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.145892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.145918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.146123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.146284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.146309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.146469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.146626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.146653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.146806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.146973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.147000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.147153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.147306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.147332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.147493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.147685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.147711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 
00:26:05.086 [2024-05-15 00:41:31.147861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.148016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.148042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.148218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.148425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.148450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.148631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.148815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.148840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.149022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.149177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.149203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.149359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.149506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.149531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.149719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.149883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.149908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.150120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.150275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.150300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 
00:26:05.086 [2024-05-15 00:41:31.150459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.150625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.150651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.086 qpair failed and we were unable to recover it. 00:26:05.086 [2024-05-15 00:41:31.150816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.086 [2024-05-15 00:41:31.151001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.151027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.151210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.151375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.151400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.151578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.151735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.151760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.151961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.152113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.152139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.152321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.152512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.152539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.152699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.152858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.152884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 
00:26:05.087 [2024-05-15 00:41:31.153056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.153239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.153264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.153471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.153647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.153676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.153833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.153979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.154005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.154171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.154328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.154356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.154535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.154685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.154710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.154866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.155024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.155050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.155218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.155372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.155397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 
00:26:05.087 [2024-05-15 00:41:31.155606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.155760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.155785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.155996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.156153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.156179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.156345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.156500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.156525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.156682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.156831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.156857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.157027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.157188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.157217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.157390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.157545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.157572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.157766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.157923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.157963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 
00:26:05.087 [2024-05-15 00:41:31.158121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.158318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.158343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.158530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.158718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.158743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.158899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.159054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.159080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.159286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.159468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.159494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.159652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.159836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.159861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.160022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.160185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.160211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.160400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.160583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.160608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 
00:26:05.087 [2024-05-15 00:41:31.160801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.160974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.161000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.161165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.161326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.161352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.161508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.161666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.161693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.161867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.162048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.162074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.162264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.162451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.162476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.162630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.162805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.162830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.163044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.163210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.163237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 
00:26:05.087 [2024-05-15 00:41:31.163396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.163559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.163586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.163746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.163936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.163963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.164122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.164307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.164332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.164493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.164679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.164705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.164863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.165038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.165065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.165233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.165387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.165412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.087 qpair failed and we were unable to recover it. 00:26:05.087 [2024-05-15 00:41:31.165588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.165754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.087 [2024-05-15 00:41:31.165780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 
00:26:05.088 [2024-05-15 00:41:31.165953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.166113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.166140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.166302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.166457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.166482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.166661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.166845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.166871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.167052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.167204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.167229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.167394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.167542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.167567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.167746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.167937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.167963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.168144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.168330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.168358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 
00:26:05.088 [2024-05-15 00:41:31.168540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.168724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.168749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.168937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.169097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.169123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.169287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.169453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.169478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.169642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.169801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.169826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.169995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.170153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.170182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.170345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.170530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.170555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.170739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.170917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.170950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 
00:26:05.088 [2024-05-15 00:41:31.171111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.171272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.171299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.171484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.171640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.171665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.171817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.171981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.172008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.172215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.172369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.172399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.172588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.172745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.172772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.172974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.173130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.173155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.173334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.173490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.173516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 
00:26:05.088 [2024-05-15 00:41:31.173685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.173864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.173890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.174137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.174293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.174319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.174476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.174631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.174656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.174848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.175012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.175040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.175201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.175353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.175378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.175543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.175715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.175740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.175892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.176056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.176082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 
00:26:05.088 [2024-05-15 00:41:31.176283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.176470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.176495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.176679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.176841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.176867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.177024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.177210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.177238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.177399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.177585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.177610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.177771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.177926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.177967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.178129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.178321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.178347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.178507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.178659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.178685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 
00:26:05.088 [2024-05-15 00:41:31.178848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.179011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.179038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.179215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.179366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.179391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.179550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.179708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.179734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.179901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.180070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.180096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.180279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.180456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.180481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.180672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.180860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.180885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.181063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.181246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.181272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 
00:26:05.088 [2024-05-15 00:41:31.181430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.181583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.181609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.181821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.181982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.182008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.088 qpair failed and we were unable to recover it. 00:26:05.088 [2024-05-15 00:41:31.182172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.088 [2024-05-15 00:41:31.182357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.182382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.182566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.182731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.182757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.182938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.183097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.183126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.183291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.183448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.183474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.183627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.183793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.183818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 
00:26:05.089 [2024-05-15 00:41:31.184007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.184168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.184193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.184375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.184595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.184622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.184816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.184988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.185015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.185173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.185364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.185389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.185555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.185717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.185742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.185941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.186129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.186155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.186325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.186487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.186513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 
00:26:05.089 [2024-05-15 00:41:31.186668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.186852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.186879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.187041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.187206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.187232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.187397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.187579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.187604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.187784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.187965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.187992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.188174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.188330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.188355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.188549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.188726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.188751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.188950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.189106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.189132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 
00:26:05.089 [2024-05-15 00:41:31.189286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.189439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.189465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.189655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.189814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.189840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.190007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.190157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.190183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.190397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.190555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.190581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.190739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.190895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.190922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.191102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.191276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.191306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.191462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.191652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.191678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 
00:26:05.089 [2024-05-15 00:41:31.191854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.192016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.192043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.192200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.192355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.192381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.192546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.192710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.192737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.192934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.193094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.193119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.193275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.193466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.193491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.193658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.193817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.193845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.194034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.194214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.194239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 
00:26:05.089 [2024-05-15 00:41:31.194391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.194554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.194581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.194770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.194924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.194956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.195125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.195299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.195324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.195503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.195663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.195689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.195869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.196035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.196062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.196252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.196438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.196464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.196622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.196809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.196835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 
00:26:05.089 [2024-05-15 00:41:31.197029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.197207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.197233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.197397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.197583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.197609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.197770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.197990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.198016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.198206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.198386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.198411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.198581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.198739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.198765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.089 qpair failed and we were unable to recover it. 00:26:05.089 [2024-05-15 00:41:31.198926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.089 [2024-05-15 00:41:31.199117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.199143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.199299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.199458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.199484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 
00:26:05.090 [2024-05-15 00:41:31.199644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.199805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.199830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.199990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.200151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.200176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.200368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.200521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.200547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.200735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.200915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.200947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.201144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.201302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.201327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.201487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.201669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.201695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.201884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.202048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.202074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 
00:26:05.090 [2024-05-15 00:41:31.202250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.202414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.202440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.202600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.202795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.202821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.203009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.203194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.203220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.203404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.203574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.203600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.203773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.203927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.203957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.204115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.204287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.204313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.204493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.204646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.204672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 
00:26:05.090 [2024-05-15 00:41:31.204848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.205033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.205059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.205224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.205401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.205427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.205639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.205814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.205840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.206044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.206197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.206223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.206380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.206548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.206578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.206769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.206926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.206956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.207115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.207303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.207330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 
00:26:05.090 [2024-05-15 00:41:31.207507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.207683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.207709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.207881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.208063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.208089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.208253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.208465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.208492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.208648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.208831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.208856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.209040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.209195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.209220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.209379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.209562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.209588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.209774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.209941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.209967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 
00:26:05.090 [2024-05-15 00:41:31.210152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.210353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.210385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.210549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.210709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.210734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.210897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.211076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.211103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.211285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.211472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.211497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.211691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.211849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.211879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.212046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.212201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.212226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.212405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.212592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.212616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 
00:26:05.090 [2024-05-15 00:41:31.212778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.212936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.212962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.213113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.213308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.213333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.213492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.213700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.213726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.090 [2024-05-15 00:41:31.213904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.214079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.090 [2024-05-15 00:41:31.214105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.090 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.214272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.214448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.214473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.214648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.214804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.214831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.215019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.215175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.215200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 
00:26:05.091 [2024-05-15 00:41:31.215358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.215547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.215573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.215753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.215909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.215940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.216107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.216290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.216315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.216498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.216661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.216686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.216842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.216999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.217028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.217201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.217356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.217384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.217571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.217723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.217748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 
00:26:05.091 [2024-05-15 00:41:31.217904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.218079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.218107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.218261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.218466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.218491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.218647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.218794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.218822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.219012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.219167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.219192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.219351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.219515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.219542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.219728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.219880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.219905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.220107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.220268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.220293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 
00:26:05.091 [2024-05-15 00:41:31.220459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.220656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.220681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.220862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.221024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.221051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.221211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.221389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.221414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.221573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.221736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.221762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.221939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.222110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.222136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.222292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.222473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.222498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 00:26:05.091 [2024-05-15 00:41:31.222656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.222832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.091 [2024-05-15 00:41:31.222858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.091 qpair failed and we were unable to recover it. 
00:26:05.091 [2024-05-15 00:41:31.223040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.223203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.223229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.223417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.223601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.223627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.223820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.223989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.224018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.224188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.224414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.224440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.224634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.224782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.224808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.224971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.225146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.225171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.225359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.225522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.225552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 
00:26:05.358 [2024-05-15 00:41:31.225713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.225876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.225901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.226068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.226258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.226284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.226501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.226657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.226684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.226870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.227035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.227061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.227217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.227406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.227433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.227597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.227754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.227779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.227936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.228102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.228129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 
00:26:05.358 [2024-05-15 00:41:31.228318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.228503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.358 [2024-05-15 00:41:31.228528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.358 qpair failed and we were unable to recover it. 00:26:05.358 [2024-05-15 00:41:31.228685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.228862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.228887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.229053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.229214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.229241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.229398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.229550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.229579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.229744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.229904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.229934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.230112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.230261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.230286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.230457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.230645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.230671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 
00:26:05.359 [2024-05-15 00:41:31.230822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.231009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.231035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.231193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.231380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.231406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.231566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.231723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.231748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.231909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.232089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.232115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.232294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.232473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.232498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.232704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.232863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.232889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.233076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.233239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.233265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 
00:26:05.359 [2024-05-15 00:41:31.233421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.233606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.233631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.233818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.234015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.234041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.234202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.234355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.234381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.234543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.234728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.234755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.234917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.235087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.235118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.235288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.235478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.235505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.235672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.235859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.235884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 
00:26:05.359 [2024-05-15 00:41:31.236071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.236253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.236279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.236446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.236605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.236630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.236796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.236994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.237022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.237181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.237331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.237356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.237503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.237657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.237682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.237845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.238060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.238088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.238288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.238475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.238500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 
00:26:05.359 [2024-05-15 00:41:31.238661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.238875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.238900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.239064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.239220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.239245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.239423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.239628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.239654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.239815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.239976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.240002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.240185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.240364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.240389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.240570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.240741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.240766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.240935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.241100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.241127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 
00:26:05.359 [2024-05-15 00:41:31.241318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.241484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.241509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.241699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.241889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.241914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.359 qpair failed and we were unable to recover it. 00:26:05.359 [2024-05-15 00:41:31.242099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.359 [2024-05-15 00:41:31.242282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.242307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.242485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.242653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.242679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.242832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.243010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.243037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.243211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.243401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.243427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.243600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.243783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.243809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 
00:26:05.360 [2024-05-15 00:41:31.243961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.244128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.244153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.244320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.244493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.244524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.244733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.244896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.244922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.245092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.245252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.245278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.245472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.245632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.245657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.245814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.245991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.246017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.246187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.246335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.246360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 
00:26:05.360 [2024-05-15 00:41:31.246520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.246672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.246699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.246865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.247045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.247071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.247255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.247441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.247466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.247667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.247844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.247869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.248033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.248221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.248246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.248413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.248598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.248624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.248776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.248939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.248964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 
00:26:05.360 [2024-05-15 00:41:31.249125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.249293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.249318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.249474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.249688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.249713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.249868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.250045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.250071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.250227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.250429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.250455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.250604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.250758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.250784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.250944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.251111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.251135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.251309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.251472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.251499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 
00:26:05.360 [2024-05-15 00:41:31.251658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.251916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.251947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.252143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.252298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.252323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.252489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.252674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.252699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.252861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.253024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.253049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.253234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.253400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.253425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.253577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.253738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.253763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.253960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.254141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.254166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 
00:26:05.360 [2024-05-15 00:41:31.254332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.254490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.254515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.254672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.254855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.254880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.255041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.255233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.255258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.360 qpair failed and we were unable to recover it. 00:26:05.360 [2024-05-15 00:41:31.255438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.360 [2024-05-15 00:41:31.255609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.255635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.255791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.255962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.255988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.256214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.256373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.256398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.256576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.256780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.256807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 
00:26:05.361 [2024-05-15 00:41:31.256984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.257158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.257184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.257345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.257501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.257526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.257691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.257848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.257876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.258053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.258222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.258248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.258406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.258563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.258589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.258775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.258947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.258974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.259163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.259383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.259409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 
00:26:05.361 [2024-05-15 00:41:31.259563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.259747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.259779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.259937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.260130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.260155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.260344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.260496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.260522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.260674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.260826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.260852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.261063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.261222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.261248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.261404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.261563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.261588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.261770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.261940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.261967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 
00:26:05.361 [2024-05-15 00:41:31.262130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.262314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.262340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.262512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.262699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.262724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.262905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.263077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.263105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.263262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.263423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.263453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.263620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.263803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.263829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.264024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.264217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.264242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.264426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.264586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.264612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 
00:26:05.361 [2024-05-15 00:41:31.264800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.264965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.264990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.265159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.265316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.265341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.265504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.265654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.265681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.265849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.266036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.266062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.266232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.266387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.266413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.266579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.266742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.266768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 00:26:05.361 [2024-05-15 00:41:31.266940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.267104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.361 [2024-05-15 00:41:31.267130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.361 qpair failed and we were unable to recover it. 
00:26:05.362 [2024-05-15 00:41:31.267329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.267489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.267514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.267665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.267851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.267876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.268033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.268220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.268246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.268434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.268592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.268617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.268790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.268954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.268981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.269238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.269451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.269476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.269653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.269806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.269831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 
00:26:05.362 [2024-05-15 00:41:31.270011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.270167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.270192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.270378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.270540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.270566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.270723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:05.362 [2024-05-15 00:41:31.270910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.270943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.271109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:26:05.362 [2024-05-15 00:41:31.271263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.271291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.271454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:05.362 [2024-05-15 00:41:31.271642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.271672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:05.362 [2024-05-15 00:41:31.271866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.272032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.272060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 
00:26:05.362 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.362 [2024-05-15 00:41:31.272226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.272385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.272411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.272570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.272758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.272783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.272966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.273131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.273157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.273318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.273479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.273514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.273671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.273850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.273875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.274072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.274234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.274260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.274466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.274634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.274660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 
00:26:05.362 [2024-05-15 00:41:31.274815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.274971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.274998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.275160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.275337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.275363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.275551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.275720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.275746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.362 qpair failed and we were unable to recover it. 00:26:05.362 [2024-05-15 00:41:31.275905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.362 [2024-05-15 00:41:31.276073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.276099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.276251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.276413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.276438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.276606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.276814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.276839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.277029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.277208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.277234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 
00:26:05.363 [2024-05-15 00:41:31.277391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.277577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.277603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.277796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.277954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.277980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.278150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.278335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.278360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.278568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.278744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.278769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.278928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.279108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.279133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.279307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.279463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.279491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.279671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.279836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.279862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 
00:26:05.363 [2024-05-15 00:41:31.280050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.280227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.280253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.280431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.280623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.280648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.280835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.281019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.281046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.281210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.281386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.281411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.281565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.281781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.281808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.281994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.282161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.282187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.282341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.282506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.282533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 
00:26:05.363 [2024-05-15 00:41:31.282710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.282902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.282927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.283104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.283300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.283325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.283507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.283688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.283714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.283903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.284077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.284103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.284270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.284447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.284473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.284641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.284798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.284824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.284989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.285152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.285178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 
00:26:05.363 [2024-05-15 00:41:31.285333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.285530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.285556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.285712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.285893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.285923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.286111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.286274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.286299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.286477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.286646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.286672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.286858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.287026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.287052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.287233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.287387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.287412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.363 qpair failed and we were unable to recover it. 00:26:05.363 [2024-05-15 00:41:31.287600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.363 [2024-05-15 00:41:31.287760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.287785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 
00:26:05.364 [2024-05-15 00:41:31.287977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.288141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.288167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.288343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.288522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.288547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.288721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.288909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.288942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.289122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.289305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.289330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.289549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.289702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.289728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.289897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.290089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.290114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.290265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.290474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.290499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 
00:26:05.364 [2024-05-15 00:41:31.290661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.290835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.290861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.291035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.291206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.291232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.291388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.291559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.291586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.291761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.291955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.291983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.292138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.292308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.292335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.292525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.292742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.292768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.292926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.293143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.293169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 
00:26:05.364 [2024-05-15 00:41:31.293351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.293541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.293567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.293779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.293933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.293960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.294165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.294327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.294353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.294538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.294698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.294723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.294898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.295065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.295092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.295242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.295418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.295443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.295627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.295789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.295815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 
00:26:05.364 [2024-05-15 00:41:31.296012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.296195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.296220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.296379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.364 [2024-05-15 00:41:31.296549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.296576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.296792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:05.364 [2024-05-15 00:41:31.296967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.297002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.364 [2024-05-15 00:41:31.297186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 [2024-05-15 00:41:31.297351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.297377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.297542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.297738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.297764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.297923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.298120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.298146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 
00:26:05.364 [2024-05-15 00:41:31.298331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.298522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.298547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.298707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.298866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.298891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.364 qpair failed and we were unable to recover it. 00:26:05.364 [2024-05-15 00:41:31.299075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.364 [2024-05-15 00:41:31.299261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.299287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.299474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.299650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.299675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.299862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.300062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.300094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.300249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.300434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.300460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.300793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.300951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.300977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 
00:26:05.365 [2024-05-15 00:41:31.301198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.301357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.301382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.301534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.301724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.301750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.301906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.302083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.302110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.302299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.302448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.302473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.302621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.302800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.302825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.303009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.303215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.303240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.303401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.303587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.303613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 
00:26:05.365 [2024-05-15 00:41:31.303810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.303998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.304024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.304180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.304336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.304361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.304515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.304679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.304704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.304898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.305104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.305130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.305297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.305483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.305508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.305666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.305847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.305872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.306056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.306232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.306257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 
00:26:05.365 [2024-05-15 00:41:31.306419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.306612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.306637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.306822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.306986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.307012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.307174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.307356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.307381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.307594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.307894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.307919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.308116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.308305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.308331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.308520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.308704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.308729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.308888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.309095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.309126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 
00:26:05.365 [2024-05-15 00:41:31.309287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.309438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.309463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.309788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.309956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.309992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.310169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.310316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.310341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.310522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.310686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.310712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.365 [2024-05-15 00:41:31.310864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.311037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.365 [2024-05-15 00:41:31.311063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.365 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.311221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.311415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.311440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.311634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.311823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.311848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 
00:26:05.366 [2024-05-15 00:41:31.312036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.312202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.312228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.312414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.312597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.312622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.312876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.313193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.313223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.313378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.313553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.313578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.313729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.313908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.313939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.314116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.314282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.314309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.314489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.314703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.314729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 
00:26:05.366 [2024-05-15 00:41:31.314917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.315094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.315119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.315284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.315474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.315499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.315688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.315866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.315891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.316118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.316307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.316333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.316511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.316695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.316721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.317063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.317265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.317291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.317464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.317656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.317681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 
00:26:05.366 [2024-05-15 00:41:31.317836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.318020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.318046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.318216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.318404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.318429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.318622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.318816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.318841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.319030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.319191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.319216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.319374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.319587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.319612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.319774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.319966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.319995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.320173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.320342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.320369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 
00:26:05.366 [2024-05-15 00:41:31.320527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.320717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.320742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.320942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.321102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.321127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.321294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.321477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.321502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.321661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.321872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.321898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.366 qpair failed and we were unable to recover it. 00:26:05.366 [2024-05-15 00:41:31.322097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.366 [2024-05-15 00:41:31.322261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.322287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.322478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.322664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.322689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 
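The run of identical failures above is the initiator side (nvme_tcp) repeatedly trying to open a TCP connection to 10.0.0.2:4420 and getting errno 111, which on Linux is ECONNREFUSED; that is consistent with nothing listening on that address/port yet, since the target transport and listener are only created a few entries further down. A quick way to confirm the errno mapping, outside the test itself (a throwaway one-liner, not part of the harness):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # ECONNREFUSED Connection refused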
00:26:05.367 [2024-05-15 00:41:31.322863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 Malloc0 00:26:05.367 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.367 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:05.367 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.367 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.367 [2024-05-15 00:41:31.324077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.324109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.324285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.324484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.324510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.324674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.324829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.324854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.325009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.325171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.325196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.325393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.325570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.325595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.325790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.325980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.326005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 
00:26:05.367 [2024-05-15 00:41:31.326202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.326387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.326413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.326597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.326755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.326780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.326890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.367 [2024-05-15 00:41:31.326974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.327171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.327194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.327403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.327592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.327617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.327833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.328001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.328027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.328218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.328409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.328436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.328623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.328787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.328812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 
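Buried in the connection noise above, the target-side bring-up begins: host/target_disconnect.sh (line 21) issues nvmf_create_transport over JSON-RPC and the target acknowledges with '*** TCP Transport Init ***'. Outside the autotest harness the same step would look roughly like this; a sketch only, assuming a running nvmf_tgt with the default RPC socket (rpc_cmd in the test is effectively a wrapper around scripts/rpc.py):

  ./scripts/rpc.py nvmf_create_transport -t tcp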
00:26:05.367 [2024-05-15 00:41:31.329027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.329202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.329227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.329390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.329578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.329603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.329799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.329955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.329986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.330146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.330335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.330360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.330551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.330714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.330741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.330893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.331095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.331121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.331288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.331454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.331479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 
00:26:05.367 [2024-05-15 00:41:31.331634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.331813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.331838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.332027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.332225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.332251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.332437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.332624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.332649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.332835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.333025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.333060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.333248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.333395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.333420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.333638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.333797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.333822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.333993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.334150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.334175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 
00:26:05.367 [2024-05-15 00:41:31.334343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.334503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 [2024-05-15 00:41:31.334528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.367 qpair failed and we were unable to recover it. 00:26:05.367 [2024-05-15 00:41:31.334708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.367 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.368 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:05.368 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.368 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.368 [2024-05-15 00:41:31.335511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.335542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.335758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.335964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.335991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.336168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.336354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.336379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.336550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.336714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.336739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.336926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.337127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.337152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 
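Next target-setup step (target_disconnect.sh line 22): create the NVMe-oF subsystem the host will connect to. Standalone equivalent, mirroring the arguments seen in the log (-a allows any host NQN to connect, -s sets the serial number):

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001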
00:26:05.368 [2024-05-15 00:41:31.337312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.337474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.337499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.337661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.337828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.337853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.338041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.338228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.338253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.338408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.338565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.338592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.338783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.338978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.339005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.339212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.339399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.339424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.339586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.339769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.339794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 
00:26:05.368 [2024-05-15 00:41:31.339972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.340155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.340180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.340368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.340518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.340543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.340697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.340874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.340899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.341082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.341269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.341295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.341474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.341671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.341698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.341858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.342041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.342067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.342250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.342405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.342429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 
00:26:05.368 [2024-05-15 00:41:31.342581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.342733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.342758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.368 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.368 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.368 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.368 [2024-05-15 00:41:31.343606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.343801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.343828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.344004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.344164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.344190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.344349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.344532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.344556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.344716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.344911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.344943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.345108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.345271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.345296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 
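Here (target_disconnect.sh line 24) the Malloc0 bdev, whose name is the stray 'Malloc0' printed a few entries earlier and presumably the output of the bdev-creation RPC, is attached to the subsystem as a namespace. A standalone sketch; the 64 MiB / 512 B geometry is an illustrative assumption, not taken from this log:

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0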
00:26:05.368 [2024-05-15 00:41:31.345488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.345677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.345701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.345895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.346064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.346089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.346269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.346480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.346504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.346682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.346892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.346917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.347089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.347243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.368 [2024-05-15 00:41:31.347268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.368 qpair failed and we were unable to recover it. 00:26:05.368 [2024-05-15 00:41:31.347480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.347670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.347695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.347878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.348039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.348066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 
00:26:05.369 [2024-05-15 00:41:31.348230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.348389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.348413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.348610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.348796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.348820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.349000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.349193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.349218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.349410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.349566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.349591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.349809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.349970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.349995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.350163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.350348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.350373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.350533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.350715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.350740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 
00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.369 [2024-05-15 00:41:31.351634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.351822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.351850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.352010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.352170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.352196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.352378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.352592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.352617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.352773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.352966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.352992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.353147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.353303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.353328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.353494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.353683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.353708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 
00:26:05.369 [2024-05-15 00:41:31.353899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.354067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.354094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.354254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.354468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.354493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.354681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.354854] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:05.369 [2024-05-15 00:41:31.354875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.354899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b0420 with addr=10.0.0.2, port=4420 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.355094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.369 [2024-05-15 00:41:31.355173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.369 [2024-05-15 00:41:31.358224] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:05.369 [2024-05-15 00:41:31.358317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0420 (107): Transport endpoint is not connected 00:26:05.369 [2024-05-15 00:41:31.358496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.369 qpair failed and we were unable to recover it. 
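With the listener added (target_disconnect.sh line 25) the target finally starts accepting: note the '*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***' notice, and the deprecation warning just before it, which says the [listen_]address.transport field still works but trtype is the one to use going forward (removal announced for v24.09). Standalone equivalent of this step, as a sketch:

  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

From this point the failures change character: instead of ECONNREFUSED the host now reaches the target but the session is immediately lost ('Transport endpoint is not connected', CQ transport error -6 on qpair id 3), which looks like the forced-disconnect condition this test case (nvmf_target_disconnect_tc2) is exercising.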
00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:05.369 00:41:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 994114 00:26:05.369 [2024-05-15 00:41:31.367696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.369 [2024-05-15 00:41:31.367898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.369 [2024-05-15 00:41:31.367926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.369 [2024-05-15 00:41:31.367955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.369 [2024-05-15 00:41:31.367968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.369 [2024-05-15 00:41:31.367996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.377563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.369 [2024-05-15 00:41:31.377721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.369 [2024-05-15 00:41:31.377747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.369 [2024-05-15 00:41:31.377762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.369 [2024-05-15 00:41:31.377774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.369 [2024-05-15 00:41:31.377802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.369 qpair failed and we were unable to recover it. 
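The rest of the section repeats one pattern: the host retries the NVMe-oF Fabrics CONNECT for I/O qpair 3, the target rejects it because controller ID 0x1 is not known to it (presumably the controller was already torn down by the disconnect under test), and the host reports the completion as 'sct 1, sc 130'. Decoding that status, as a sketch: SCT 1 is the command-specific status type and 130 is 0x82, which for the Fabrics Connect command corresponds to the 'Connect Invalid Parameters' status; the stale controller ID is the invalid parameter here.

  python3 -c 'print(hex(130))'
  # 0x82 -> Fabrics Connect 'Invalid Parameters' (command-specific status, SCT 1)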
00:26:05.369 [2024-05-15 00:41:31.387575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.369 [2024-05-15 00:41:31.387739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.369 [2024-05-15 00:41:31.387765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.369 [2024-05-15 00:41:31.387779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.369 [2024-05-15 00:41:31.387791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.369 [2024-05-15 00:41:31.387819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.397539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.369 [2024-05-15 00:41:31.397700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.369 [2024-05-15 00:41:31.397726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.369 [2024-05-15 00:41:31.397741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.369 [2024-05-15 00:41:31.397753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.369 [2024-05-15 00:41:31.397781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.369 qpair failed and we were unable to recover it. 00:26:05.369 [2024-05-15 00:41:31.407594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.407759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.407786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.407801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.407812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.407841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 
00:26:05.370 [2024-05-15 00:41:31.417611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.417772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.417798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.417818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.417831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.417859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 00:26:05.370 [2024-05-15 00:41:31.427598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.427765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.427790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.427804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.427816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.427844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 00:26:05.370 [2024-05-15 00:41:31.437721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.437893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.437919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.437941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.437954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.437983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 
00:26:05.370 [2024-05-15 00:41:31.447711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.447887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.447913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.447927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.447949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.447977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 00:26:05.370 [2024-05-15 00:41:31.457830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.458048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.458074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.458089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.458101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.458129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 00:26:05.370 [2024-05-15 00:41:31.467726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.467893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.467918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.467944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.467958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.467986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 
00:26:05.370 [2024-05-15 00:41:31.477777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.477951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.477977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.477992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.478004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.478031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 00:26:05.370 [2024-05-15 00:41:31.487806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.487974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.488001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.488015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.488027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.488056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 00:26:05.370 [2024-05-15 00:41:31.498002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.498173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.498205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.498220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.498232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.498260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 
00:26:05.370 [2024-05-15 00:41:31.507907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.370 [2024-05-15 00:41:31.508089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.370 [2024-05-15 00:41:31.508122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.370 [2024-05-15 00:41:31.508138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.370 [2024-05-15 00:41:31.508150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.370 [2024-05-15 00:41:31.508179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.370 qpair failed and we were unable to recover it. 00:26:05.629 [2024-05-15 00:41:31.518000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.629 [2024-05-15 00:41:31.518187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.629 [2024-05-15 00:41:31.518216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.629 [2024-05-15 00:41:31.518231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.518243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.518272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.527970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.528146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.528173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.528188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.528200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.528229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 
00:26:05.630 [2024-05-15 00:41:31.537967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.538148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.538175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.538190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.538203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.538232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.547991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.548180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.548206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.548222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.548234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.548262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.558100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.558262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.558288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.558303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.558315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.558343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 
00:26:05.630 [2024-05-15 00:41:31.568071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.568234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.568260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.568274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.568287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.568315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.578065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.578219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.578245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.578259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.578271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.578299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.588110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.588281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.588307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.588322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.588334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.588362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 
00:26:05.630 [2024-05-15 00:41:31.598205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.598417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.598448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.598464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.598476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.598504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.608188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.608348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.608374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.608388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.608400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.608429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.618290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.618448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.618474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.618489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.618501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.618528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 
00:26:05.630 [2024-05-15 00:41:31.628296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.628471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.628497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.628512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.628524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.628552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.638239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.638398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.638424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.638439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.638451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.638484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 00:26:05.630 [2024-05-15 00:41:31.648286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.630 [2024-05-15 00:41:31.648447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.630 [2024-05-15 00:41:31.648473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.630 [2024-05-15 00:41:31.648487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.630 [2024-05-15 00:41:31.648499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.630 [2024-05-15 00:41:31.648527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.630 qpair failed and we were unable to recover it. 
00:26:05.631 [2024-05-15 00:41:31.658339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.658540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.658565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.658579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.658591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.658619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.668420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.668591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.668616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.668631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.668643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.668670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.678535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.678697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.678723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.678738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.678751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.678778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 
00:26:05.631 [2024-05-15 00:41:31.688486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.688649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.688679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.688694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.688706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.688734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.698434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.698592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.698617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.698632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.698644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.698671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.708425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.708590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.708613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.708627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.708639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.708666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 
00:26:05.631 [2024-05-15 00:41:31.718474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.718634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.718660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.718674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.718686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.718714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.728470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.728624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.728650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.728664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.728677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.728709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.738510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.738670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.738696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.738711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.738723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.738751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 
00:26:05.631 [2024-05-15 00:41:31.748595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.748764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.748790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.748805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.748819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.748848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.758574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.758734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.758760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.758774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.758786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.758815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 00:26:05.631 [2024-05-15 00:41:31.768582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.768742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.631 [2024-05-15 00:41:31.768768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.631 [2024-05-15 00:41:31.768782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.631 [2024-05-15 00:41:31.768794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.631 [2024-05-15 00:41:31.768822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.631 qpair failed and we were unable to recover it. 
00:26:05.631 [2024-05-15 00:41:31.778700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.631 [2024-05-15 00:41:31.778859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.632 [2024-05-15 00:41:31.778889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.632 [2024-05-15 00:41:31.778904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.632 [2024-05-15 00:41:31.778916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.632 [2024-05-15 00:41:31.778953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.632 qpair failed and we were unable to recover it. 00:26:05.632 [2024-05-15 00:41:31.788661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.632 [2024-05-15 00:41:31.788833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.632 [2024-05-15 00:41:31.788860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.632 [2024-05-15 00:41:31.788875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.632 [2024-05-15 00:41:31.788887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.632 [2024-05-15 00:41:31.788915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.632 qpair failed and we were unable to recover it. 00:26:05.891 [2024-05-15 00:41:31.798719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.798888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.798915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.798938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.798958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.798988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 
00:26:05.891 [2024-05-15 00:41:31.808705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.808868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.808894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.808909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.808921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.808960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 00:26:05.891 [2024-05-15 00:41:31.818736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.818890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.818916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.818937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.818951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.818984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 00:26:05.891 [2024-05-15 00:41:31.828881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.829079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.829105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.829120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.829132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.829160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 
00:26:05.891 [2024-05-15 00:41:31.838813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.838998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.839024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.839039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.839051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.839079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 00:26:05.891 [2024-05-15 00:41:31.848862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.849034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.849060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.849075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.849086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.849115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 00:26:05.891 [2024-05-15 00:41:31.858872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.859042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.859068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.859083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.859095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.859122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 
00:26:05.891 [2024-05-15 00:41:31.868919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.891 [2024-05-15 00:41:31.869101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.891 [2024-05-15 00:41:31.869131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.891 [2024-05-15 00:41:31.869147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.891 [2024-05-15 00:41:31.869159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.891 [2024-05-15 00:41:31.869187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.891 qpair failed and we were unable to recover it. 00:26:05.891 [2024-05-15 00:41:31.878942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.879133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.879159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.879173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.879185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.879213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.888967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.889126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.889151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.889166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.889178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.889206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 
00:26:05.892 [2024-05-15 00:41:31.898981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.899149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.899174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.899189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.899202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.899236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.909022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.909211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.909238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.909253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.909270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.909299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.919046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.919213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.919238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.919252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.919264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.919292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 
00:26:05.892 [2024-05-15 00:41:31.929072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.929223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.929248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.929262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.929274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.929302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.939178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.939346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.939371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.939386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.939398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.939426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.949231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.949393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.949418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.949433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.949445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.949473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 
00:26:05.892 [2024-05-15 00:41:31.959180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.959340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.959365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.959380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.959391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.959419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.969224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.969410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.969438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.969453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.969464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.969493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.979267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.979436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.979463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.979481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.979493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.979522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 
00:26:05.892 [2024-05-15 00:41:31.989255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.989424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.989450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.989465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.989477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.989505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.892 qpair failed and we were unable to recover it. 00:26:05.892 [2024-05-15 00:41:31.999274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.892 [2024-05-15 00:41:31.999441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.892 [2024-05-15 00:41:31.999467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.892 [2024-05-15 00:41:31.999481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.892 [2024-05-15 00:41:31.999498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.892 [2024-05-15 00:41:31.999527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.893 qpair failed and we were unable to recover it. 00:26:05.893 [2024-05-15 00:41:32.009300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.893 [2024-05-15 00:41:32.009456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.893 [2024-05-15 00:41:32.009482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.893 [2024-05-15 00:41:32.009496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.893 [2024-05-15 00:41:32.009508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.893 [2024-05-15 00:41:32.009536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.893 qpair failed and we were unable to recover it. 
00:26:05.893 [2024-05-15 00:41:32.019451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.893 [2024-05-15 00:41:32.019639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.893 [2024-05-15 00:41:32.019664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.893 [2024-05-15 00:41:32.019678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.893 [2024-05-15 00:41:32.019690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.893 [2024-05-15 00:41:32.019718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.893 qpair failed and we were unable to recover it. 00:26:05.893 [2024-05-15 00:41:32.029350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.893 [2024-05-15 00:41:32.029510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.893 [2024-05-15 00:41:32.029536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.893 [2024-05-15 00:41:32.029550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.893 [2024-05-15 00:41:32.029562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.893 [2024-05-15 00:41:32.029590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.893 qpair failed and we were unable to recover it. 00:26:05.893 [2024-05-15 00:41:32.039406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.893 [2024-05-15 00:41:32.039611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.893 [2024-05-15 00:41:32.039636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.893 [2024-05-15 00:41:32.039651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.893 [2024-05-15 00:41:32.039662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.893 [2024-05-15 00:41:32.039690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.893 qpair failed and we were unable to recover it. 
00:26:05.893 [2024-05-15 00:41:32.049438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.893 [2024-05-15 00:41:32.049679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.893 [2024-05-15 00:41:32.049713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.893 [2024-05-15 00:41:32.049738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.893 [2024-05-15 00:41:32.049751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:05.893 [2024-05-15 00:41:32.049782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.893 qpair failed and we were unable to recover it. 00:26:06.150 [2024-05-15 00:41:32.059419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.150 [2024-05-15 00:41:32.059581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.150 [2024-05-15 00:41:32.059608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.150 [2024-05-15 00:41:32.059623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.150 [2024-05-15 00:41:32.059635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.150 [2024-05-15 00:41:32.059664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.150 qpair failed and we were unable to recover it. 00:26:06.150 [2024-05-15 00:41:32.069468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.150 [2024-05-15 00:41:32.069642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.150 [2024-05-15 00:41:32.069667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.150 [2024-05-15 00:41:32.069682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.150 [2024-05-15 00:41:32.069693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.150 [2024-05-15 00:41:32.069722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.150 qpair failed and we were unable to recover it. 
00:26:06.150 [2024-05-15 00:41:32.079513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.150 [2024-05-15 00:41:32.079703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.150 [2024-05-15 00:41:32.079728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.150 [2024-05-15 00:41:32.079743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.150 [2024-05-15 00:41:32.079754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.150 [2024-05-15 00:41:32.079782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.150 qpair failed and we were unable to recover it. 00:26:06.150 [2024-05-15 00:41:32.089529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.150 [2024-05-15 00:41:32.089696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.150 [2024-05-15 00:41:32.089722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.150 [2024-05-15 00:41:32.089737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.150 [2024-05-15 00:41:32.089758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.150 [2024-05-15 00:41:32.089787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.150 qpair failed and we were unable to recover it. 00:26:06.150 [2024-05-15 00:41:32.099523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.151 [2024-05-15 00:41:32.099683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.151 [2024-05-15 00:41:32.099709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.151 [2024-05-15 00:41:32.099723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.151 [2024-05-15 00:41:32.099735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.151 [2024-05-15 00:41:32.099763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.151 qpair failed and we were unable to recover it. 
00:26:06.151 [2024-05-15 00:41:32.109696] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.151 [2024-05-15 00:41:32.109865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.151 [2024-05-15 00:41:32.109891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.151 [2024-05-15 00:41:32.109909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.151 [2024-05-15 00:41:32.109920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.151 [2024-05-15 00:41:32.109956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.151 qpair failed and we were unable to recover it. 00:26:06.151 [2024-05-15 00:41:32.119591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.151 [2024-05-15 00:41:32.119753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.151 [2024-05-15 00:41:32.119778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.151 [2024-05-15 00:41:32.119793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.151 [2024-05-15 00:41:32.119804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.151 [2024-05-15 00:41:32.119832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.151 qpair failed and we were unable to recover it. 00:26:06.151 [2024-05-15 00:41:32.129636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.151 [2024-05-15 00:41:32.129793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.151 [2024-05-15 00:41:32.129818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.151 [2024-05-15 00:41:32.129832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.151 [2024-05-15 00:41:32.129844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x9b0420 00:26:06.151 [2024-05-15 00:41:32.129872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.151 qpair failed and we were unable to recover it. 00:26:06.151 [2024-05-15 00:41:32.129907] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:06.151 A controller has encountered a failure and is being reset. 00:26:06.151 Controller properly reset. 
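The failure loop above is the host retrying the I/O-qpair CONNECT: the target rejects each attempt with "Unknown controller ID 0x1", the host sees the CONNECT complete with sct 1 / sc 130 and gives up on the qpair with CQ transport error -6, and only the failed Keep Alive finally triggers the controller reset recorded at the end of the block. When a loop like this has to be debugged by hand, the target's view of the subsystem can be queried over RPC while it is happening. The sketch below is a hypothetical debugging aid, not part of target_disconnect.sh; it assumes the default rpc.py socket and the cnode1 NQN shown in the log, and must run on the target host.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Controller IDs the target currently tracks for this subsystem; the errors
    # above suggest ID 0x1 is not among them when the I/O-qpair CONNECT arrives.
    $RPC nvmf_subsystem_get_controllers "$NQN"

    # Admin and I/O qpairs the target still has open, with their states.
    $RPC nvmf_subsystem_get_qpairs "$NQN"

    # Confirm the 10.0.0.2:4420 TCP listener used by the CONNECT attempts is still up.
    $RPC nvmf_subsystem_get_listeners "$NQN"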
00:26:11.408 Initializing NVMe Controllers 00:26:11.408 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:11.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:11.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:11.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:11.408 Initialization complete. Launching workers. 00:26:11.408 Starting thread on core 1 00:26:11.408 Starting thread on core 2 00:26:11.408 Starting thread on core 3 00:26:11.408 Starting thread on core 0 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:11.408 00:26:11.408 real 0m11.422s 00:26:11.408 user 0m30.723s 00:26:11.408 sys 0m7.231s 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:11.408 ************************************ 00:26:11.408 END TEST nvmf_target_disconnect_tc2 00:26:11.408 ************************************ 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.408 rmmod nvme_tcp 00:26:11.408 rmmod nvme_fabrics 00:26:11.408 rmmod nvme_keyring 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 994523 ']' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 994523 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' -z 994523 ']' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # kill -0 994523 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # uname 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 994523 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@953 -- # process_name=reactor_4 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_4 = sudo ']' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 994523' 00:26:11.408 killing process with pid 994523 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # kill 994523 00:26:11.408 [2024-05-15 00:41:36.642108] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # wait 994523 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.408 00:41:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.312 00:41:38 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:13.312 00:26:13.312 real 0m16.607s 00:26:13.312 user 0m56.064s 00:26:13.312 sys 0m10.076s 00:26:13.312 00:41:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:13.312 00:41:38 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:13.312 ************************************ 00:26:13.312 END TEST nvmf_target_disconnect 00:26:13.312 ************************************ 00:26:13.312 00:41:39 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:26:13.312 00:41:39 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:13.312 00:41:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.312 00:41:39 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:13.312 00:26:13.312 real 19m53.478s 00:26:13.312 user 46m42.199s 00:26:13.312 sys 5m12.400s 00:26:13.312 00:41:39 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:13.312 00:41:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.312 ************************************ 00:26:13.312 END TEST nvmf_tcp 00:26:13.312 ************************************ 00:26:13.312 00:41:39 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:26:13.312 00:41:39 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:13.312 00:41:39 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:13.312 00:41:39 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:13.312 00:41:39 -- common/autotest_common.sh@10 -- # set +x 00:26:13.312 ************************************ 00:26:13.312 START TEST spdkcli_nvmf_tcp 00:26:13.312 ************************************ 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:13.312 * Looking for test storage... 00:26:13.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.312 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=995722 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 995722 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 995722 ']' 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:13.313 00:41:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.313 [2024-05-15 00:41:39.219360] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:26:13.313 [2024-05-15 00:41:39.219437] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995722 ] 00:26:13.313 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.313 [2024-05-15 00:41:39.285593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.313 [2024-05-15 00:41:39.396681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.313 [2024-05-15 00:41:39.396687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.246 00:41:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:14.246 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:14.246 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:14.246 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:14.246 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:14.246 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:14.246 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:14.246 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:14.246 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:14.246 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:14.246 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:14.246 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:14.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:14.247 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:14.247 ' 00:26:16.776 [2024-05-15 00:41:42.745923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.149 [2024-05-15 00:41:43.977768] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:18.149 [2024-05-15 00:41:43.978420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:20.674 [2024-05-15 00:41:46.241406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:22.045 [2024-05-15 00:41:48.187584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:23.941 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:23.941 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:23.941 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:23.941 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:23.941 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:23.941 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:23.941 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:23.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:23.941 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:23.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:23.941 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:23.941 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:23.941 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:23.941 00:41:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:24.199 00:41:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:24.199 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:24.199 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:24.199 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:24.199 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:24.199 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:24.199 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:24.199 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:24.199 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:24.199 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:24.199 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:24.199 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:24.199 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:24.199 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:24.199 ' 00:26:29.456 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:29.456 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:29.456 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:29.456 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:29.456 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:29.456 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:29.456 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:29.456 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:29.456 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:29.456 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:29.456 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:29.456 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:29.456 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:29.456 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 995722 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 995722 ']' 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 995722 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 995722 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 995722' 00:26:29.456 killing process with pid 995722 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 995722 00:26:29.456 [2024-05-15 00:41:55.554182] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:29.456 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 995722 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 995722 ']' 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 995722 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 995722 ']' 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 995722 00:26:29.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (995722) - No such process 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 995722 is not found' 00:26:29.714 Process with pid 995722 is not found 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:29.714 00:41:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:29.714 00:26:29.714 real 0m16.728s 00:26:29.714 user 0m35.313s 00:26:29.715 sys 0m0.873s 00:26:29.715 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:29.715 00:41:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:26:29.715 ************************************ 00:26:29.715 END TEST spdkcli_nvmf_tcp 00:26:29.715 ************************************ 00:26:29.715 00:41:55 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:29.715 00:41:55 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:29.715 00:41:55 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:29.715 00:41:55 -- common/autotest_common.sh@10 -- # set +x 00:26:29.973 ************************************ 00:26:29.973 START TEST nvmf_identify_passthru 00:26:29.973 ************************************ 00:26:29.973 00:41:55 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:29.973 * Looking for test storage... 00:26:29.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:29.973 00:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.973 00:41:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.973 00:41:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.973 00:41:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:29.973 00:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.973 00:41:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.973 00:41:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.973 00:41:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:29.973 00:41:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.973 00:41:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.973 00:41:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:29.973 00:41:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:29.973 00:41:55 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:29.973 00:41:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.502 00:41:58 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.502 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:32.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:32.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:32.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:32.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
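The entries above show how gather_supported_nvmf_pci_devs resolves the two Intel E810 functions found on this node (0000:0a:00.0 and 0000:0a:00.1, device ID 0x159b bound to the ice driver) to the net devices cvl_0_0 and cvl_0_1 by globbing sysfs. A minimal standalone sketch of that mapping, using one of the BDFs from this log purely as an example:

    # Map a PCI function to the kernel net device(s) bound to it, the same way
    # the trace above does: glob the device's net/ directory under sysfs.
    pci=0000:0a:00.0                                 # example BDF taken from this log
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue                 # skip functions with no netdev
        echo "Found net devices under $pci: ${netdir##*/}"
    done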
00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:32.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:26:32.503 00:26:32.503 --- 10.0.0.2 ping statistics --- 00:26:32.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.503 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:26:32.503 00:26:32.503 --- 10.0.0.1 ping statistics --- 00:26:32.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.503 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:32.503 00:41:58 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:32.503 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:32.503 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:26:32.503 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:26:32.762 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:88:00.0 00:26:32.762 00:41:58 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:88:00.0 00:26:32.762 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:26:32.762 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:26:32.762 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:26:32.762 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:32.762 00:41:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:32.762 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.945 
00:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:26:36.945 00:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:26:36.945 00:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:36.945 00:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:36.945 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.153 00:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:41.153 00:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:41.153 00:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:41.153 00:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1000890 00:26:41.153 00:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:41.153 00:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.153 00:42:07 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1000890 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 1000890 ']' 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:41.153 00:42:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:41.153 [2024-05-15 00:42:07.155576] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:26:41.153 [2024-05-15 00:42:07.155668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.153 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.153 [2024-05-15 00:42:07.234546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.410 [2024-05-15 00:42:07.352462] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.411 [2024-05-15 00:42:07.352525] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
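The serial and model number captured above (PHLJ916004901P0FGN / INTEL) come from identifying the local PCIe controller directly; the same grep/awk extraction is repeated over NVMe/TCP further down so the test can check that the passthru identify handler reports identical values through the fabrics path. The extraction pattern in isolation, with paths shortened relative to the workspace root:

    # Identify the local controller over PCIe and pull one field out of the
    # human-readable report; swapping the -r transport string for the TCP form
    # used later in this log gives the fabrics-side value to compare against.
    ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 \
        | grep 'Serial Number:' | awk '{print $3}'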
00:26:41.411 [2024-05-15 00:42:07.352539] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.411 [2024-05-15 00:42:07.352550] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.411 [2024-05-15 00:42:07.352574] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.411 [2024-05-15 00:42:07.355952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.411 [2024-05-15 00:42:07.356016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.411 [2024-05-15 00:42:07.356087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.411 [2024-05-15 00:42:07.356083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:26:42.342 00:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:42.342 INFO: Log level set to 20 00:26:42.342 INFO: Requests: 00:26:42.342 { 00:26:42.342 "jsonrpc": "2.0", 00:26:42.342 "method": "nvmf_set_config", 00:26:42.342 "id": 1, 00:26:42.342 "params": { 00:26:42.342 "admin_cmd_passthru": { 00:26:42.342 "identify_ctrlr": true 00:26:42.342 } 00:26:42.342 } 00:26:42.342 } 00:26:42.342 00:26:42.342 INFO: response: 00:26:42.342 { 00:26:42.342 "jsonrpc": "2.0", 00:26:42.342 "id": 1, 00:26:42.342 "result": true 00:26:42.342 } 00:26:42.342 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.342 00:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:42.342 INFO: Setting log level to 20 00:26:42.342 INFO: Setting log level to 20 00:26:42.342 INFO: Log level set to 20 00:26:42.342 INFO: Log level set to 20 00:26:42.342 INFO: Requests: 00:26:42.342 { 00:26:42.342 "jsonrpc": "2.0", 00:26:42.342 "method": "framework_start_init", 00:26:42.342 "id": 1 00:26:42.342 } 00:26:42.342 00:26:42.342 INFO: Requests: 00:26:42.342 { 00:26:42.342 "jsonrpc": "2.0", 00:26:42.342 "method": "framework_start_init", 00:26:42.342 "id": 1 00:26:42.342 } 00:26:42.342 00:26:42.342 [2024-05-15 00:42:08.258326] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:42.342 INFO: response: 00:26:42.342 { 00:26:42.342 "jsonrpc": "2.0", 00:26:42.342 "id": 1, 00:26:42.342 "result": true 00:26:42.342 } 00:26:42.342 00:26:42.342 INFO: response: 00:26:42.342 { 00:26:42.342 "jsonrpc": "2.0", 00:26:42.342 "id": 1, 00:26:42.342 "result": true 00:26:42.342 } 00:26:42.342 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.342 00:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.342 00:42:08 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:26:42.342 INFO: Setting log level to 40 00:26:42.342 INFO: Setting log level to 40 00:26:42.342 INFO: Setting log level to 40 00:26:42.342 [2024-05-15 00:42:08.268437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.342 00:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:42.342 00:42:08 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.342 00:42:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:45.618 Nvme0n1 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:45.618 [2024-05-15 00:42:11.174197] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:45.618 [2024-05-15 00:42:11.174511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:45.618 [ 00:26:45.618 { 00:26:45.618 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:45.618 "subtype": "Discovery", 00:26:45.618 "listen_addresses": [], 00:26:45.618 "allow_any_host": true, 00:26:45.618 "hosts": [] 00:26:45.618 }, 00:26:45.618 { 00:26:45.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.618 "subtype": "NVMe", 00:26:45.618 "listen_addresses": [ 00:26:45.618 { 00:26:45.618 "trtype": "TCP", 
00:26:45.618 "adrfam": "IPv4", 00:26:45.618 "traddr": "10.0.0.2", 00:26:45.618 "trsvcid": "4420" 00:26:45.618 } 00:26:45.618 ], 00:26:45.618 "allow_any_host": true, 00:26:45.618 "hosts": [], 00:26:45.618 "serial_number": "SPDK00000000000001", 00:26:45.618 "model_number": "SPDK bdev Controller", 00:26:45.618 "max_namespaces": 1, 00:26:45.618 "min_cntlid": 1, 00:26:45.618 "max_cntlid": 65519, 00:26:45.618 "namespaces": [ 00:26:45.618 { 00:26:45.618 "nsid": 1, 00:26:45.618 "bdev_name": "Nvme0n1", 00:26:45.618 "name": "Nvme0n1", 00:26:45.618 "nguid": "DAB5275FD398413EA479A7210824FE84", 00:26:45.618 "uuid": "dab5275f-d398-413e-a479-a7210824fe84" 00:26:45.618 } 00:26:45.618 ] 00:26:45.618 } 00:26:45.618 ] 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:45.618 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:45.618 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:45.618 00:42:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.618 rmmod nvme_tcp 00:26:45.618 rmmod nvme_fabrics 00:26:45.618 rmmod 
nvme_keyring 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1000890 ']' 00:26:45.618 00:42:11 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1000890 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 1000890 ']' 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 1000890 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1000890 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1000890' 00:26:45.618 killing process with pid 1000890 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 1000890 00:26:45.618 [2024-05-15 00:42:11.519761] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:45.618 00:42:11 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 1000890 00:26:46.991 00:42:13 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.991 00:42:13 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.991 00:42:13 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.991 00:42:13 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.991 00:42:13 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.991 00:42:13 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.991 00:42:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:46.991 00:42:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.523 00:42:15 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.523 00:26:49.523 real 0m19.289s 00:26:49.523 user 0m29.738s 00:26:49.523 sys 0m2.783s 00:26:49.523 00:42:15 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:49.523 00:42:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:49.523 ************************************ 00:26:49.523 END TEST nvmf_identify_passthru 00:26:49.523 ************************************ 00:26:49.523 00:42:15 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:49.523 00:42:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:26:49.523 00:42:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:49.523 00:42:15 -- common/autotest_common.sh@10 -- # set +x 00:26:49.523 ************************************ 00:26:49.523 START TEST nvmf_dif 
00:26:49.523 ************************************ 00:26:49.523 00:42:15 nvmf_dif -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:49.523 * Looking for test storage... 00:26:49.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.523 00:42:15 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.523 00:42:15 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.523 00:42:15 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.523 00:42:15 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.523 00:42:15 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.523 00:42:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.523 00:42:15 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.523 00:42:15 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.523 00:42:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:49.524 00:42:15 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.524 00:42:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:49.524 00:42:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:49.524 00:42:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:49.524 00:42:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:49.524 00:42:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.524 00:42:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:49.524 00:42:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.524 00:42:15 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.524 00:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
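For the DIF tests, the NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64 and NULL_DIF=1 values set near the top of dif.sh describe the null bdev each sub-test creates once the target is up. The corresponding RPC appears later in this log via rpc_cmd; shown here as a hand-issued sketch through scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket:

    # 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and
    # DIF type 1 protection information -- the layout the fio_dif jobs exercise.
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1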
00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:52.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.053 00:42:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:52.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.054 00:42:17 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:52.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:52.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:26:52.054 00:26:52.054 --- 10.0.0.2 ping statistics --- 00:26:52.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.054 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:26:52.054 00:26:52.054 --- 10.0.0.1 ping statistics --- 00:26:52.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.054 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:52.054 00:42:17 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:53.428 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:53.428 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:53.428 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:53.428 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:53.428 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:53.428 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:53.428 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:53.428 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:53.428 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:53.428 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:53.428 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:53.428 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:53.428 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:53.428 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:53.428 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:53.428 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:53.428 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.428 00:42:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:53.428 00:42:19 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1005125 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:53.428 00:42:19 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1005125 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 1005125 ']' 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:53.428 00:42:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:53.428 [2024-05-15 00:42:19.390702] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:26:53.428 [2024-05-15 00:42:19.390787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.428 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.428 [2024-05-15 00:42:19.476672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.686 [2024-05-15 00:42:19.610319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.686 [2024-05-15 00:42:19.610375] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.686 [2024-05-15 00:42:19.610412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.686 [2024-05-15 00:42:19.610429] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.686 [2024-05-15 00:42:19.610458] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
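Once this second target instance finishes starting, the test creates the TCP transport with --dif-insert-or-strip (visible a few entries below), which has the target insert and strip the protection information itself, so the initiator-side fio jobs read and write plain 512-byte blocks against the DIF-formatted null bdev. The equivalent hand-issued form, as a sketch against the default RPC socket:

    # Same options the test passes via NVMF_TRANSPORT_OPTS: TCP transport with
    # the -o option used throughout this log plus DIF insert/strip handling.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip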
00:26:53.686 [2024-05-15 00:42:19.610491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.252 00:42:20 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:54.252 00:42:20 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:26:54.253 00:42:20 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:54.253 00:42:20 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.253 00:42:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:54.253 00:42:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:54.253 [2024-05-15 00:42:20.404963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.253 00:42:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:54.253 00:42:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:54.512 ************************************ 00:26:54.512 START TEST fio_dif_1_default 00:26:54.512 ************************************ 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:54.512 bdev_null0 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:54.512 [2024-05-15 00:42:20.465045] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:54.512 [2024-05-15 00:42:20.465295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.512 { 00:26:54.512 "params": { 00:26:54.512 "name": "Nvme$subsystem", 00:26:54.512 "trtype": "$TEST_TRANSPORT", 00:26:54.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.512 "adrfam": "ipv4", 00:26:54.512 "trsvcid": "$NVMF_PORT", 00:26:54.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.512 "hdgst": ${hdgst:-false}, 00:26:54.512 "ddgst": ${ddgst:-false} 00:26:54.512 }, 00:26:54.512 "method": "bdev_nvme_attach_controller" 00:26:54.512 } 00:26:54.512 EOF 00:26:54.512 )") 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in 
"${sanitizers[@]}" 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:54.512 "params": { 00:26:54.512 "name": "Nvme0", 00:26:54.512 "trtype": "tcp", 00:26:54.512 "traddr": "10.0.0.2", 00:26:54.512 "adrfam": "ipv4", 00:26:54.512 "trsvcid": "4420", 00:26:54.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:54.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:54.512 "hdgst": false, 00:26:54.512 "ddgst": false 00:26:54.512 }, 00:26:54.512 "method": "bdev_nvme_attach_controller" 00:26:54.512 }' 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:54.512 00:42:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:54.770 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:54.770 fio-3.35 00:26:54.770 Starting 1 thread 00:26:54.770 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.995 00:27:06.995 filename0: (groupid=0, jobs=1): err= 0: pid=1005415: Wed May 15 00:42:31 2024 00:27:06.995 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10031msec) 00:27:06.995 slat (nsec): min=4444, max=47322, avg=10740.40, stdev=4927.92 00:27:06.995 clat (usec): min=40881, max=45628, avg=41761.81, stdev=498.26 00:27:06.995 lat (usec): min=40889, max=45642, avg=41772.55, stdev=498.04 00:27:06.995 clat percentiles (usec): 00:27:06.995 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:06.995 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:27:06.995 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:06.995 | 99.00th=[42730], 99.50th=[43254], 
99.90th=[45876], 99.95th=[45876], 00:27:06.995 | 99.99th=[45876] 00:27:06.995 bw ( KiB/s): min= 352, max= 384, per=99.79%, avg=382.40, stdev= 7.16, samples=20 00:27:06.995 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:27:06.995 lat (msec) : 50=100.00% 00:27:06.995 cpu : usr=89.43%, sys=10.23%, ctx=10, majf=0, minf=235 00:27:06.995 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.995 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.995 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:06.995 00:27:06.995 Run status group 0 (all jobs): 00:27:06.995 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10031-10031msec 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.995 00:27:06.995 real 0m11.054s 00:27:06.995 user 0m10.070s 00:27:06.995 sys 0m1.299s 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:06.995 00:42:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:06.995 ************************************ 00:27:06.995 END TEST fio_dif_1_default 00:27:06.995 ************************************ 00:27:06.995 00:42:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:06.995 00:42:31 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:06.995 00:42:31 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:06.995 00:42:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:06.995 ************************************ 00:27:06.996 START TEST fio_dif_1_multi_subsystems 00:27:06.996 ************************************ 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 bdev_null0 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 [2024-05-15 00:42:31.568413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 bdev_null1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.996 { 00:27:06.996 "params": { 00:27:06.996 "name": "Nvme$subsystem", 00:27:06.996 "trtype": "$TEST_TRANSPORT", 00:27:06.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.996 "adrfam": "ipv4", 00:27:06.996 "trsvcid": "$NVMF_PORT", 00:27:06.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.996 "hdgst": ${hdgst:-false}, 00:27:06.996 "ddgst": ${ddgst:-false} 00:27:06.996 }, 00:27:06.996 "method": "bdev_nvme_attach_controller" 00:27:06.996 } 00:27:06.996 EOF 00:27:06.996 )") 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.996 { 00:27:06.996 "params": { 00:27:06.996 "name": "Nvme$subsystem", 00:27:06.996 "trtype": "$TEST_TRANSPORT", 00:27:06.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.996 "adrfam": "ipv4", 00:27:06.996 "trsvcid": "$NVMF_PORT", 00:27:06.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.996 "hdgst": ${hdgst:-false}, 00:27:06.996 "ddgst": ${ddgst:-false} 00:27:06.996 }, 00:27:06.996 "method": "bdev_nvme_attach_controller" 00:27:06.996 } 00:27:06.996 EOF 00:27:06.996 )") 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
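Note on the fio side of this trace: gen_fio_conf streams a job description over /dev/fd/61 while the JSON subsystem config above goes to /dev/fd/62. Below is a minimal sketch, as a bash heredoc, of an equivalent on-disk job file. It is reconstructed only from the fio banner further down (randread, 4 KiB blocks, iodepth 4, spdk_bdev ioengine, one job per subsystem, roughly 10 s runtime); the bdev names Nvme0n1/Nvme1n1, the thread=1 setting, and the exact option set are assumptions, not the literal output of target/dif.sh.

cat > /tmp/dif_multi.fio <<'FIO'
[global]
# served by the LD_PRELOAD'ed SPDK bdev plugin (see the fio invocation below)
ioengine=spdk_bdev
# the SPDK plugin generally requires fio's thread mode
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10
[filename0]
# assumed name of the bdev attached for nqn.2016-06.io.spdk:cnode0
filename=Nvme0n1
[filename1]
# assumed name of the bdev attached for the second subsystem
filename=Nvme1n1
FIO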
00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:06.996 "params": { 00:27:06.996 "name": "Nvme0", 00:27:06.996 "trtype": "tcp", 00:27:06.996 "traddr": "10.0.0.2", 00:27:06.996 "adrfam": "ipv4", 00:27:06.996 "trsvcid": "4420", 00:27:06.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:06.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:06.996 "hdgst": false, 00:27:06.996 "ddgst": false 00:27:06.996 }, 00:27:06.996 "method": "bdev_nvme_attach_controller" 00:27:06.996 },{ 00:27:06.996 "params": { 00:27:06.996 "name": "Nvme1", 00:27:06.996 "trtype": "tcp", 00:27:06.996 "traddr": "10.0.0.2", 00:27:06.996 "adrfam": "ipv4", 00:27:06.996 "trsvcid": "4420", 00:27:06.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.996 "hdgst": false, 00:27:06.996 "ddgst": false 00:27:06.996 }, 00:27:06.996 "method": "bdev_nvme_attach_controller" 00:27:06.996 }' 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:06.996 00:42:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:06.997 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:06.997 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:06.997 fio-3.35 00:27:06.997 Starting 2 threads 00:27:06.997 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.954 00:27:16.954 filename0: (groupid=0, jobs=1): err= 0: pid=1006819: Wed May 15 00:42:42 2024 00:27:16.954 read: IOPS=183, BW=735KiB/s (753kB/s)(7360KiB/10011msec) 00:27:16.954 slat (nsec): min=7511, max=28717, avg=11126.43, stdev=3726.16 00:27:16.954 clat (usec): min=922, max=42639, avg=21728.58, stdev=20252.54 00:27:16.954 lat (usec): min=930, max=42661, avg=21739.70, stdev=20251.34 00:27:16.954 clat percentiles (usec): 00:27:16.954 | 1.00th=[ 1057], 5.00th=[ 1123], 10.00th=[ 1139], 20.00th=[ 1172], 00:27:16.954 | 30.00th=[ 1254], 40.00th=[ 1287], 50.00th=[41157], 60.00th=[41681], 00:27:16.954 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:27:16.954 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:16.954 | 99.99th=[42730] 
00:27:16.954 bw ( KiB/s): min= 672, max= 768, per=49.81%, avg=734.40, stdev=35.17, samples=20 00:27:16.954 iops : min= 168, max= 192, avg=183.60, stdev= 8.79, samples=20 00:27:16.954 lat (usec) : 1000=0.71% 00:27:16.954 lat (msec) : 2=48.64%, 50=50.65% 00:27:16.954 cpu : usr=94.27%, sys=5.46%, ctx=13, majf=0, minf=44 00:27:16.954 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.954 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.954 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:16.954 filename1: (groupid=0, jobs=1): err= 0: pid=1006820: Wed May 15 00:42:42 2024 00:27:16.954 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10010msec) 00:27:16.954 slat (nsec): min=7947, max=67128, avg=11119.29, stdev=3955.50 00:27:16.954 clat (usec): min=906, max=42660, avg=21632.25, stdev=20433.56 00:27:16.954 lat (usec): min=917, max=42692, avg=21643.37, stdev=20434.56 00:27:16.954 clat percentiles (usec): 00:27:16.954 | 1.00th=[ 955], 5.00th=[ 979], 10.00th=[ 988], 20.00th=[ 1012], 00:27:16.954 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41681], 00:27:16.954 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:27:16.954 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:16.954 | 99.99th=[42730] 00:27:16.954 bw ( KiB/s): min= 672, max= 768, per=50.01%, avg=737.60, stdev=35.17, samples=20 00:27:16.954 iops : min= 168, max= 192, avg=184.40, stdev= 8.79, samples=20 00:27:16.954 lat (usec) : 1000=15.53% 00:27:16.954 lat (msec) : 2=34.04%, 50=50.43% 00:27:16.954 cpu : usr=94.18%, sys=5.46%, ctx=61, majf=0, minf=191 00:27:16.954 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.954 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.954 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:16.954 00:27:16.954 Run status group 0 (all jobs): 00:27:16.954 READ: bw=1474KiB/s (1509kB/s), 735KiB/s-738KiB/s (753kB/s-756kB/s), io=14.4MiB (15.1MB), run=10010-10011msec 00:27:16.954 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:16.954 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:16.954 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:16.954 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:16.954 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:16.954 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.211 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.212 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:17.212 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.212 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.212 00:27:17.212 real 0m11.611s 00:27:17.212 user 0m20.511s 00:27:17.212 sys 0m1.413s 00:27:17.212 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:17.212 00:42:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 ************************************ 00:27:17.212 END TEST fio_dif_1_multi_subsystems 00:27:17.212 ************************************ 00:27:17.212 00:42:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:17.212 00:42:43 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:17.212 00:42:43 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:17.212 00:42:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 ************************************ 00:27:17.212 START TEST fio_dif_rand_params 00:27:17.212 ************************************ 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:17.212 
00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 bdev_null0 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:17.212 [2024-05-15 00:42:43.233136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:17.212 { 00:27:17.212 "params": { 00:27:17.212 "name": "Nvme$subsystem", 00:27:17.212 "trtype": "$TEST_TRANSPORT", 00:27:17.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.212 "adrfam": "ipv4", 00:27:17.212 "trsvcid": "$NVMF_PORT", 00:27:17.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.212 "hdgst": ${hdgst:-false}, 00:27:17.212 "ddgst": ${ddgst:-false} 00:27:17.212 }, 00:27:17.212 "method": "bdev_nvme_attach_controller" 00:27:17.212 } 00:27:17.212 EOF 00:27:17.212 )") 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
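The rpc_cmd wrapper traced here is a thin front end for SPDK's scripts/rpc.py, so the target-side setup of this DIF-type-3 run can be reproduced by hand with the sketch below. The RPC names and arguments are copied from the trace; the rpc.py path, the default /var/tmp/spdk.sock socket, and the assumption that nvmf_tgt is already running with a tcp transport created (as dif.sh does earlier) are the only things added.

# assumes a running nvmf_tgt on the default RPC socket and an existing
# tcp transport (nvmf_create_transport -t tcp), created earlier in the test
rpc=./scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420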
00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:17.212 "params": { 00:27:17.212 "name": "Nvme0", 00:27:17.212 "trtype": "tcp", 00:27:17.212 "traddr": "10.0.0.2", 00:27:17.212 "adrfam": "ipv4", 00:27:17.212 "trsvcid": "4420", 00:27:17.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:17.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:17.212 "hdgst": false, 00:27:17.212 "ddgst": false 00:27:17.212 }, 00:27:17.212 "method": "bdev_nvme_attach_controller" 00:27:17.212 }' 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:17.212 00:42:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:17.475 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:17.475 ... 
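The harness drives fio through the SPDK bdev plugin by preloading it and feeding both the JSON config and the job file through /dev/fd pipes. Outside the harness, the same invocation pattern can use ordinary files, as sketched below; the plugin path is a placeholder mirroring the one in the trace, and bdev.json / dif.fio are hypothetical file names standing in for the two pipes.

# minimal reproduction of the fio invocation above, assuming an SPDK tree
# built with ./configure --with-fio=/usr/src/fio
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf=./bdev.json ./dif.fio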
00:27:17.475 fio-3.35 00:27:17.475 Starting 3 threads 00:27:17.475 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.036 00:27:24.036 filename0: (groupid=0, jobs=1): err= 0: pid=1008218: Wed May 15 00:42:49 2024 00:27:24.036 read: IOPS=158, BW=19.8MiB/s (20.8MB/s)(99.1MiB/5005msec) 00:27:24.036 slat (nsec): min=5883, max=36341, avg=13075.40, stdev=4120.39 00:27:24.036 clat (usec): min=5077, max=57683, avg=18911.88, stdev=16052.42 00:27:24.036 lat (usec): min=5089, max=57696, avg=18924.96, stdev=16052.37 00:27:24.036 clat percentiles (usec): 00:27:24.036 | 1.00th=[ 6980], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:27:24.036 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11863], 60.00th=[12780], 00:27:24.036 | 70.00th=[13960], 80.00th=[16188], 90.00th=[52691], 95.00th=[53740], 00:27:24.036 | 99.00th=[56361], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:27:24.036 | 99.99th=[57934] 00:27:24.036 bw ( KiB/s): min=12288, max=26164, per=27.67%, avg=20229.20, stdev=3917.33, samples=10 00:27:24.036 iops : min= 96, max= 204, avg=158.00, stdev=30.54, samples=10 00:27:24.036 lat (msec) : 10=24.46%, 20=57.25%, 50=2.02%, 100=16.27% 00:27:24.036 cpu : usr=90.73%, sys=8.73%, ctx=7, majf=0, minf=81 00:27:24.036 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:24.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.036 issued rwts: total=793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.036 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:24.036 filename0: (groupid=0, jobs=1): err= 0: pid=1008220: Wed May 15 00:42:49 2024 00:27:24.036 read: IOPS=185, BW=23.2MiB/s (24.3MB/s)(117MiB/5043msec) 00:27:24.036 slat (nsec): min=5657, max=55342, avg=13072.18, stdev=4318.31 00:27:24.036 clat (usec): min=6159, max=93954, avg=16176.04, stdev=14381.70 00:27:24.036 lat (usec): min=6171, max=93967, avg=16189.11, stdev=14381.91 00:27:24.036 clat percentiles (usec): 00:27:24.036 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 8225], 20.00th=[ 9241], 00:27:24.036 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10945], 60.00th=[11994], 00:27:24.036 | 70.00th=[12911], 80.00th=[14222], 90.00th=[51119], 95.00th=[52691], 00:27:24.036 | 99.00th=[54789], 99.50th=[55313], 99.90th=[93848], 99.95th=[93848], 00:27:24.036 | 99.99th=[93848] 00:27:24.036 bw ( KiB/s): min=17664, max=33280, per=32.60%, avg=23833.60, stdev=4485.40, samples=10 00:27:24.036 iops : min= 138, max= 260, avg=186.20, stdev=35.04, samples=10 00:27:24.036 lat (msec) : 10=33.94%, 20=53.53%, 50=0.43%, 100=12.10% 00:27:24.036 cpu : usr=91.07%, sys=8.41%, ctx=9, majf=0, minf=118 00:27:24.036 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:24.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.036 issued rwts: total=934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.036 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:24.036 filename0: (groupid=0, jobs=1): err= 0: pid=1008221: Wed May 15 00:42:49 2024 00:27:24.036 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(144MiB/5046msec) 00:27:24.036 slat (nsec): min=5826, max=48680, avg=13086.60, stdev=3381.99 00:27:24.036 clat (usec): min=5977, max=94683, avg=13052.49, stdev=10088.48 00:27:24.036 lat (usec): min=5989, max=94696, avg=13065.57, stdev=10088.51 00:27:24.036 clat percentiles (usec): 
00:27:24.036 | 1.00th=[ 6521], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 9110], 00:27:24.036 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[11076], 00:27:24.036 | 70.00th=[12256], 80.00th=[13435], 90.00th=[15139], 95.00th=[49546], 00:27:24.036 | 99.00th=[53740], 99.50th=[55313], 99.90th=[91751], 99.95th=[94897], 00:27:24.036 | 99.99th=[94897] 00:27:24.036 bw ( KiB/s): min=20736, max=36864, per=40.34%, avg=29491.20, stdev=5280.75, samples=10 00:27:24.036 iops : min= 162, max= 288, avg=230.40, stdev=41.26, samples=10 00:27:24.036 lat (msec) : 10=40.35%, 20=54.20%, 50=0.87%, 100=4.59% 00:27:24.036 cpu : usr=90.68%, sys=8.76%, ctx=8, majf=0, minf=146 00:27:24.036 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:24.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.036 issued rwts: total=1155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.036 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:24.036 00:27:24.036 Run status group 0 (all jobs): 00:27:24.036 READ: bw=71.4MiB/s (74.9MB/s), 19.8MiB/s-28.6MiB/s (20.8MB/s-30.0MB/s), io=360MiB (378MB), run=5005-5046msec 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:24.036 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
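As a quick sanity check on the throughput numbers in the Run status group above: with 128 KiB reads, bandwidth should equal IOPS times 128 KiB, and the three per-job figures (19.8, 23.2, 28.6 MiB/s) should sum to roughly the 71.4 MiB/s aggregate, with the small gap explained by rounding. The one-liner below reproduces the first job's number from its reported 158 IOPS; the values are copied from the log, nothing else is assumed.

# 158 IOPS * 128 KiB per IO ~= 19.75 MiB/s, matching the reported 19.8 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 158 * 128 / 1024 }'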
00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 bdev_null0 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 [2024-05-15 00:42:49.435413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 bdev_null1 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 bdev_null2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:27:24.037 { 00:27:24.037 "params": { 00:27:24.037 "name": "Nvme$subsystem", 00:27:24.037 "trtype": "$TEST_TRANSPORT", 00:27:24.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.037 "adrfam": "ipv4", 00:27:24.037 "trsvcid": "$NVMF_PORT", 00:27:24.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.037 "hdgst": ${hdgst:-false}, 00:27:24.037 "ddgst": ${ddgst:-false} 00:27:24.037 }, 00:27:24.037 "method": "bdev_nvme_attach_controller" 00:27:24.037 } 00:27:24.037 EOF 00:27:24.037 )") 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:24.037 { 00:27:24.037 "params": { 00:27:24.037 "name": "Nvme$subsystem", 00:27:24.037 "trtype": "$TEST_TRANSPORT", 00:27:24.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.037 "adrfam": "ipv4", 00:27:24.037 "trsvcid": "$NVMF_PORT", 00:27:24.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.037 "hdgst": ${hdgst:-false}, 00:27:24.037 "ddgst": ${ddgst:-false} 00:27:24.037 }, 00:27:24.037 "method": "bdev_nvme_attach_controller" 00:27:24.037 } 00:27:24.037 EOF 00:27:24.037 )") 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:24.037 { 00:27:24.037 "params": { 00:27:24.037 "name": "Nvme$subsystem", 00:27:24.037 "trtype": "$TEST_TRANSPORT", 00:27:24.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.037 "adrfam": "ipv4", 00:27:24.037 "trsvcid": "$NVMF_PORT", 00:27:24.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.037 "hdgst": ${hdgst:-false}, 00:27:24.037 "ddgst": ${ddgst:-false} 00:27:24.037 }, 00:27:24.037 "method": "bdev_nvme_attach_controller" 00:27:24.037 } 00:27:24.037 EOF 00:27:24.037 )") 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:24.037 00:42:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:24.037 "params": { 00:27:24.037 "name": "Nvme0", 00:27:24.037 "trtype": "tcp", 00:27:24.037 "traddr": "10.0.0.2", 00:27:24.037 "adrfam": "ipv4", 00:27:24.037 "trsvcid": "4420", 00:27:24.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.038 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.038 "hdgst": false, 00:27:24.038 "ddgst": false 00:27:24.038 }, 00:27:24.038 "method": "bdev_nvme_attach_controller" 00:27:24.038 },{ 00:27:24.038 "params": { 00:27:24.038 "name": "Nvme1", 00:27:24.038 "trtype": "tcp", 00:27:24.038 "traddr": "10.0.0.2", 00:27:24.038 "adrfam": "ipv4", 00:27:24.038 "trsvcid": "4420", 00:27:24.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:24.038 "hdgst": false, 00:27:24.038 "ddgst": false 00:27:24.038 }, 00:27:24.038 "method": "bdev_nvme_attach_controller" 00:27:24.038 },{ 00:27:24.038 "params": { 00:27:24.038 "name": "Nvme2", 00:27:24.038 "trtype": "tcp", 00:27:24.038 "traddr": "10.0.0.2", 00:27:24.038 "adrfam": "ipv4", 00:27:24.038 "trsvcid": "4420", 00:27:24.038 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:24.038 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:24.038 "hdgst": false, 00:27:24.038 "ddgst": false 00:27:24.038 }, 00:27:24.038 "method": "bdev_nvme_attach_controller" 00:27:24.038 }' 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1342 -- # asan_lib= 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:24.038 00:42:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.038 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:24.038 ... 00:27:24.038 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:24.038 ... 00:27:24.038 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:24.038 ... 00:27:24.038 fio-3.35 00:27:24.038 Starting 24 threads 00:27:24.038 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.236 00:27:36.236 filename0: (groupid=0, jobs=1): err= 0: pid=1009083: Wed May 15 00:43:00 2024 00:27:36.236 read: IOPS=76, BW=304KiB/s (312kB/s)(3068KiB/10079msec) 00:27:36.236 slat (usec): min=8, max=302, avg=59.86, stdev=23.31 00:27:36.236 clat (msec): min=45, max=387, avg=209.65, stdev=56.36 00:27:36.236 lat (msec): min=45, max=387, avg=209.71, stdev=56.36 00:27:36.236 clat percentiles (msec): 00:27:36.236 | 1.00th=[ 60], 5.00th=[ 93], 10.00th=[ 127], 20.00th=[ 157], 00:27:36.236 | 30.00th=[ 209], 40.00th=[ 222], 50.00th=[ 226], 60.00th=[ 232], 00:27:36.236 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 266], 00:27:36.236 | 99.00th=[ 351], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:27:36.236 | 99.99th=[ 388] 00:27:36.236 bw ( KiB/s): min= 176, max= 616, per=5.08%, avg=300.40, stdev=92.81, samples=20 00:27:36.236 iops : min= 44, max= 154, avg=75.10, stdev=23.20, samples=20 00:27:36.236 lat (msec) : 50=0.91%, 100=5.35%, 250=76.53%, 500=17.21% 00:27:36.236 cpu : usr=95.32%, sys=2.48%, ctx=131, majf=0, minf=32 00:27:36.236 IO depths : 1=0.1%, 2=0.3%, 4=6.3%, 8=80.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:27:36.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.236 complete : 0=0.0%, 4=88.8%, 8=5.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.236 issued rwts: total=767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.236 filename0: (groupid=0, jobs=1): err= 0: pid=1009084: Wed May 15 00:43:00 2024 00:27:36.236 read: IOPS=71, BW=286KiB/s (293kB/s)(2880KiB/10076msec) 00:27:36.236 slat (nsec): min=4747, max=79566, avg=18689.07, stdev=15901.32 00:27:36.236 clat (msec): min=58, max=379, avg=223.03, stdev=40.88 00:27:36.236 lat (msec): min=58, max=379, avg=223.04, stdev=40.87 00:27:36.236 clat percentiles (msec): 00:27:36.236 | 1.00th=[ 59], 5.00th=[ 159], 10.00th=[ 182], 20.00th=[ 207], 00:27:36.236 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 230], 60.00th=[ 239], 00:27:36.236 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 271], 00:27:36.236 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 380], 99.95th=[ 380], 00:27:36.236 | 99.99th=[ 380] 00:27:36.236 bw ( KiB/s): min= 144, max= 384, per=4.76%, avg=281.60, stdev=65.54, samples=20 00:27:36.236 iops : min= 36, max= 96, avg=70.40, stdev=16.38, samples=20 00:27:36.236 lat (msec) : 100=4.44%, 250=76.81%, 500=18.75% 00:27:36.236 cpu : usr=98.22%, sys=1.37%, ctx=20, majf=0, minf=16 
00:27:36.236 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename0: (groupid=0, jobs=1): err= 0: pid=1009085: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10056msec) 00:27:36.237 slat (usec): min=8, max=106, avg=43.50, stdev=26.22 00:27:36.237 clat (msec): min=155, max=476, avg=324.08, stdev=69.92 00:27:36.237 lat (msec): min=155, max=476, avg=324.12, stdev=69.92 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 199], 5.00th=[ 205], 10.00th=[ 226], 20.00th=[ 264], 00:27:36.237 | 30.00th=[ 288], 40.00th=[ 305], 50.00th=[ 317], 60.00th=[ 342], 00:27:36.237 | 70.00th=[ 376], 80.00th=[ 393], 90.00th=[ 418], 95.00th=[ 430], 00:27:36.237 | 99.00th=[ 430], 99.50th=[ 439], 99.90th=[ 477], 99.95th=[ 477], 00:27:36.237 | 99.99th=[ 477] 00:27:36.237 bw ( KiB/s): min= 128, max= 384, per=3.24%, avg=192.00, stdev=75.23, samples=20 00:27:36.237 iops : min= 32, max= 96, avg=48.00, stdev=18.81, samples=20 00:27:36.237 lat (msec) : 250=16.53%, 500=83.47% 00:27:36.237 cpu : usr=97.77%, sys=1.58%, ctx=51, majf=0, minf=15 00:27:36.237 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename0: (groupid=0, jobs=1): err= 0: pid=1009086: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=68, BW=273KiB/s (280kB/s)(2752KiB/10065msec) 00:27:36.237 slat (nsec): min=7144, max=81643, avg=18132.53, stdev=14338.94 00:27:36.237 clat (msec): min=128, max=308, avg=233.19, stdev=22.97 00:27:36.237 lat (msec): min=128, max=308, avg=233.20, stdev=22.97 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 199], 20.00th=[ 215], 00:27:36.237 | 30.00th=[ 222], 40.00th=[ 228], 50.00th=[ 236], 60.00th=[ 241], 00:27:36.237 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 271], 00:27:36.237 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:27:36.237 | 99.99th=[ 309] 00:27:36.237 bw ( KiB/s): min= 144, max= 384, per=4.54%, avg=268.75, stdev=52.08, samples=20 00:27:36.237 iops : min= 36, max= 96, avg=67.15, stdev=13.03, samples=20 00:27:36.237 lat (msec) : 250=76.16%, 500=23.84% 00:27:36.237 cpu : usr=97.06%, sys=1.83%, ctx=154, majf=0, minf=20 00:27:36.237 IO depths : 1=0.6%, 2=6.8%, 4=25.0%, 8=55.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename0: (groupid=0, jobs=1): err= 0: pid=1009087: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=66, BW=265KiB/s (272kB/s)(2672KiB/10065msec) 00:27:36.237 slat (usec): min=8, max=111, avg=40.25, stdev=28.20 00:27:36.237 clat (msec): min=144, max=428, avg=240.78, 
stdev=40.61 00:27:36.237 lat (msec): min=144, max=428, avg=240.82, stdev=40.62 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 199], 20.00th=[ 218], 00:27:36.237 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 239], 60.00th=[ 243], 00:27:36.237 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 271], 95.00th=[ 338], 00:27:36.237 | 99.00th=[ 405], 99.50th=[ 418], 99.90th=[ 430], 99.95th=[ 430], 00:27:36.237 | 99.99th=[ 430] 00:27:36.237 bw ( KiB/s): min= 128, max= 384, per=4.40%, avg=260.80, stdev=58.29, samples=20 00:27:36.237 iops : min= 32, max= 96, avg=65.20, stdev=14.57, samples=20 00:27:36.237 lat (msec) : 250=68.86%, 500=31.14% 00:27:36.237 cpu : usr=97.51%, sys=1.61%, ctx=67, majf=0, minf=21 00:27:36.237 IO depths : 1=1.0%, 2=5.5%, 4=18.3%, 8=63.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=92.8%, 8=3.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename0: (groupid=0, jobs=1): err= 0: pid=1009088: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10046msec) 00:27:36.237 slat (nsec): min=8783, max=83749, avg=26071.92, stdev=18879.98 00:27:36.237 clat (msec): min=128, max=526, avg=323.81, stdev=79.56 00:27:36.237 lat (msec): min=128, max=526, avg=323.84, stdev=79.55 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 140], 5.00th=[ 194], 10.00th=[ 226], 20.00th=[ 255], 00:27:36.237 | 30.00th=[ 288], 40.00th=[ 309], 50.00th=[ 321], 60.00th=[ 342], 00:27:36.237 | 70.00th=[ 376], 80.00th=[ 397], 90.00th=[ 435], 95.00th=[ 435], 00:27:36.237 | 99.00th=[ 485], 99.50th=[ 506], 99.90th=[ 527], 99.95th=[ 527], 00:27:36.237 | 99.99th=[ 527] 00:27:36.237 bw ( KiB/s): min= 128, max= 384, per=3.25%, avg=192.00, stdev=73.96, samples=20 00:27:36.237 iops : min= 32, max= 96, avg=48.00, stdev=18.49, samples=20 00:27:36.237 lat (msec) : 250=16.94%, 500=82.26%, 750=0.81% 00:27:36.237 cpu : usr=97.93%, sys=1.38%, ctx=38, majf=0, minf=19 00:27:36.237 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename0: (groupid=0, jobs=1): err= 0: pid=1009089: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=57, BW=231KiB/s (237kB/s)(2328KiB/10063msec) 00:27:36.237 slat (nsec): min=5709, max=49542, avg=17031.40, stdev=7805.69 00:27:36.237 clat (msec): min=138, max=457, avg=276.43, stdev=65.10 00:27:36.237 lat (msec): min=138, max=457, avg=276.45, stdev=65.10 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 140], 5.00th=[ 201], 10.00th=[ 207], 20.00th=[ 230], 00:27:36.237 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 279], 00:27:36.237 | 70.00th=[ 288], 80.00th=[ 309], 90.00th=[ 380], 95.00th=[ 393], 00:27:36.237 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:27:36.237 | 99.99th=[ 460] 00:27:36.237 bw ( KiB/s): min= 128, max= 384, per=3.83%, avg=226.40, stdev=63.00, samples=20 00:27:36.237 iops : min= 32, max= 96, avg=56.60, stdev=15.75, samples=20 00:27:36.237 lat (msec) : 250=42.27%, 
500=57.73% 00:27:36.237 cpu : usr=98.02%, sys=1.51%, ctx=41, majf=0, minf=19 00:27:36.237 IO depths : 1=3.8%, 2=9.3%, 4=22.5%, 8=55.7%, 16=8.8%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename0: (groupid=0, jobs=1): err= 0: pid=1009090: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10065msec) 00:27:36.237 slat (usec): min=7, max=242, avg=26.93, stdev=14.05 00:27:36.237 clat (msec): min=148, max=502, avg=323.44, stdev=71.23 00:27:36.237 lat (msec): min=148, max=502, avg=323.46, stdev=71.23 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 201], 5.00th=[ 205], 10.00th=[ 207], 20.00th=[ 264], 00:27:36.237 | 30.00th=[ 288], 40.00th=[ 300], 50.00th=[ 317], 60.00th=[ 342], 00:27:36.237 | 70.00th=[ 376], 80.00th=[ 393], 90.00th=[ 418], 95.00th=[ 430], 00:27:36.237 | 99.00th=[ 443], 99.50th=[ 481], 99.90th=[ 502], 99.95th=[ 502], 00:27:36.237 | 99.99th=[ 502] 00:27:36.237 bw ( KiB/s): min= 127, max= 384, per=3.24%, avg=191.95, stdev=77.74, samples=20 00:27:36.237 iops : min= 31, max= 96, avg=47.95, stdev=19.47, samples=20 00:27:36.237 lat (msec) : 250=16.94%, 500=82.66%, 750=0.40% 00:27:36.237 cpu : usr=98.23%, sys=1.29%, ctx=27, majf=0, minf=13 00:27:36.237 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename1: (groupid=0, jobs=1): err= 0: pid=1009091: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10046msec) 00:27:36.237 slat (nsec): min=9027, max=92434, avg=28141.60, stdev=20907.86 00:27:36.237 clat (msec): min=182, max=505, avg=323.81, stdev=70.82 00:27:36.237 lat (msec): min=182, max=505, avg=323.84, stdev=70.81 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 197], 5.00th=[ 199], 10.00th=[ 226], 20.00th=[ 255], 00:27:36.237 | 30.00th=[ 288], 40.00th=[ 309], 50.00th=[ 317], 60.00th=[ 342], 00:27:36.237 | 70.00th=[ 376], 80.00th=[ 397], 90.00th=[ 422], 95.00th=[ 435], 00:27:36.237 | 99.00th=[ 472], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 506], 00:27:36.237 | 99.99th=[ 506] 00:27:36.237 bw ( KiB/s): min= 128, max= 384, per=3.24%, avg=192.00, stdev=73.96, samples=20 00:27:36.237 iops : min= 32, max= 96, avg=48.00, stdev=18.49, samples=20 00:27:36.237 lat (msec) : 250=15.32%, 500=84.27%, 750=0.40% 00:27:36.237 cpu : usr=97.92%, sys=1.39%, ctx=65, majf=0, minf=19 00:27:36.237 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:27:36.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.237 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.237 filename1: (groupid=0, jobs=1): err= 0: pid=1009092: Wed May 15 00:43:00 2024 00:27:36.237 read: IOPS=74, BW=298KiB/s (306kB/s)(3008KiB/10082msec) 00:27:36.237 slat (usec): 
min=5, max=220, avg=26.46, stdev=26.39 00:27:36.237 clat (msec): min=47, max=304, avg=213.63, stdev=50.66 00:27:36.237 lat (msec): min=47, max=304, avg=213.65, stdev=50.66 00:27:36.237 clat percentiles (msec): 00:27:36.237 | 1.00th=[ 48], 5.00th=[ 126], 10.00th=[ 144], 20.00th=[ 167], 00:27:36.237 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 230], 60.00th=[ 239], 00:27:36.237 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 271], 00:27:36.237 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 305], 00:27:36.237 | 99.99th=[ 305] 00:27:36.237 bw ( KiB/s): min= 144, max= 512, per=4.98%, avg=294.40, stdev=88.31, samples=20 00:27:36.237 iops : min= 36, max= 128, avg=73.60, stdev=22.08, samples=20 00:27:36.237 lat (msec) : 50=2.13%, 100=2.13%, 250=75.80%, 500=19.95% 00:27:36.237 cpu : usr=96.81%, sys=1.89%, ctx=225, majf=0, minf=31 00:27:36.238 IO depths : 1=0.9%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename1: (groupid=0, jobs=1): err= 0: pid=1009093: Wed May 15 00:43:00 2024 00:27:36.238 read: IOPS=47, BW=192KiB/s (196kB/s)(1920KiB/10013msec) 00:27:36.238 slat (nsec): min=5748, max=64008, avg=14077.74, stdev=7308.03 00:27:36.238 clat (msec): min=196, max=502, avg=333.64, stdev=73.79 00:27:36.238 lat (msec): min=196, max=502, avg=333.65, stdev=73.79 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 197], 5.00th=[ 209], 10.00th=[ 226], 20.00th=[ 284], 00:27:36.238 | 30.00th=[ 288], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 355], 00:27:36.238 | 70.00th=[ 384], 80.00th=[ 405], 90.00th=[ 426], 95.00th=[ 435], 00:27:36.238 | 99.00th=[ 485], 99.50th=[ 498], 99.90th=[ 502], 99.95th=[ 502], 00:27:36.238 | 99.99th=[ 502] 00:27:36.238 bw ( KiB/s): min= 127, max= 256, per=3.13%, avg=185.55, stdev=60.90, samples=20 00:27:36.238 iops : min= 31, max= 64, avg=46.35, stdev=15.26, samples=20 00:27:36.238 lat (msec) : 250=17.08%, 500=82.50%, 750=0.42% 00:27:36.238 cpu : usr=98.34%, sys=1.28%, ctx=22, majf=0, minf=15 00:27:36.238 IO depths : 1=2.5%, 2=8.8%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename1: (groupid=0, jobs=1): err= 0: pid=1009094: Wed May 15 00:43:00 2024 00:27:36.238 read: IOPS=65, BW=261KiB/s (267kB/s)(2624KiB/10067msec) 00:27:36.238 slat (nsec): min=8659, max=98942, avg=38414.81, stdev=26656.65 00:27:36.238 clat (msec): min=74, max=453, avg=245.18, stdev=44.84 00:27:36.238 lat (msec): min=74, max=453, avg=245.21, stdev=44.85 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 140], 5.00th=[ 197], 10.00th=[ 211], 20.00th=[ 220], 00:27:36.238 | 30.00th=[ 226], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:27:36.238 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 284], 95.00th=[ 321], 00:27:36.238 | 99.00th=[ 384], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:27:36.238 | 99.99th=[ 456] 00:27:36.238 bw ( KiB/s): min= 128, max= 384, per=4.34%, avg=256.00, stdev=44.35, samples=20 
00:27:36.238 iops : min= 32, max= 96, avg=64.00, stdev=11.09, samples=20 00:27:36.238 lat (msec) : 100=0.30%, 250=73.48%, 500=26.22% 00:27:36.238 cpu : usr=98.13%, sys=1.33%, ctx=33, majf=0, minf=20 00:27:36.238 IO depths : 1=4.4%, 2=9.3%, 4=20.9%, 8=57.3%, 16=8.1%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=93.0%, 8=1.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename1: (groupid=0, jobs=1): err= 0: pid=1009095: Wed May 15 00:43:00 2024 00:27:36.238 read: IOPS=64, BW=258KiB/s (264kB/s)(2584KiB/10009msec) 00:27:36.238 slat (usec): min=8, max=173, avg=19.67, stdev=18.61 00:27:36.238 clat (msec): min=162, max=411, avg=247.74, stdev=36.63 00:27:36.238 lat (msec): min=162, max=411, avg=247.76, stdev=36.63 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 192], 5.00th=[ 201], 10.00th=[ 207], 20.00th=[ 222], 00:27:36.238 | 30.00th=[ 226], 40.00th=[ 234], 50.00th=[ 245], 60.00th=[ 247], 00:27:36.238 | 70.00th=[ 262], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 317], 00:27:36.238 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 409], 99.95th=[ 409], 00:27:36.238 | 99.99th=[ 409] 00:27:36.238 bw ( KiB/s): min= 128, max= 368, per=4.25%, avg=252.00, stdev=44.92, samples=20 00:27:36.238 iops : min= 32, max= 92, avg=63.00, stdev=11.23, samples=20 00:27:36.238 lat (msec) : 250=64.40%, 500=35.60% 00:27:36.238 cpu : usr=97.30%, sys=1.84%, ctx=39, majf=0, minf=18 00:27:36.238 IO depths : 1=2.8%, 2=7.9%, 4=21.5%, 8=58.0%, 16=9.8%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename1: (groupid=0, jobs=1): err= 0: pid=1009096: Wed May 15 00:43:00 2024 00:27:36.238 read: IOPS=69, BW=280KiB/s (286kB/s)(2816KiB/10065msec) 00:27:36.238 slat (nsec): min=8491, max=67434, avg=17872.52, stdev=11540.12 00:27:36.238 clat (msec): min=148, max=325, avg=227.87, stdev=28.17 00:27:36.238 lat (msec): min=148, max=325, avg=227.89, stdev=28.16 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 148], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 211], 00:27:36.238 | 30.00th=[ 220], 40.00th=[ 224], 50.00th=[ 232], 60.00th=[ 239], 00:27:36.238 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 271], 00:27:36.238 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 326], 99.95th=[ 326], 00:27:36.238 | 99.99th=[ 326] 00:27:36.238 bw ( KiB/s): min= 144, max= 384, per=4.66%, avg=275.15, stdev=57.73, samples=20 00:27:36.238 iops : min= 36, max= 96, avg=68.75, stdev=14.45, samples=20 00:27:36.238 lat (msec) : 250=81.25%, 500=18.75% 00:27:36.238 cpu : usr=98.33%, sys=1.25%, ctx=51, majf=0, minf=18 00:27:36.238 IO depths : 1=1.7%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename1: (groupid=0, jobs=1): err= 0: pid=1009097: Wed May 15 00:43:00 2024 
00:27:36.238 read: IOPS=74, BW=298KiB/s (306kB/s)(3008KiB/10078msec) 00:27:36.238 slat (nsec): min=4750, max=84460, avg=19612.60, stdev=17805.56 00:27:36.238 clat (msec): min=45, max=357, avg=213.56, stdev=49.55 00:27:36.238 lat (msec): min=45, max=357, avg=213.58, stdev=49.55 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 46], 5.00th=[ 128], 10.00th=[ 146], 20.00th=[ 182], 00:27:36.238 | 30.00th=[ 211], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 239], 00:27:36.238 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 271], 00:27:36.238 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 359], 99.95th=[ 359], 00:27:36.238 | 99.99th=[ 359] 00:27:36.238 bw ( KiB/s): min= 144, max= 513, per=4.98%, avg=294.45, stdev=83.10, samples=20 00:27:36.238 iops : min= 36, max= 128, avg=73.60, stdev=20.74, samples=20 00:27:36.238 lat (msec) : 50=2.13%, 100=2.13%, 250=77.93%, 500=17.82% 00:27:36.238 cpu : usr=97.82%, sys=1.72%, ctx=26, majf=0, minf=19 00:27:36.238 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename1: (groupid=0, jobs=1): err= 0: pid=1009098: Wed May 15 00:43:00 2024 00:27:36.238 read: IOPS=61, BW=248KiB/s (254kB/s)(2496KiB/10065msec) 00:27:36.238 slat (usec): min=7, max=161, avg=21.20, stdev=15.82 00:27:36.238 clat (msec): min=160, max=458, avg=257.09, stdev=47.75 00:27:36.238 lat (msec): min=160, max=458, avg=257.11, stdev=47.76 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 199], 20.00th=[ 228], 00:27:36.238 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:27:36.238 | 70.00th=[ 266], 80.00th=[ 288], 90.00th=[ 338], 95.00th=[ 376], 00:27:36.238 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 460], 99.95th=[ 460], 00:27:36.238 | 99.99th=[ 460] 00:27:36.238 bw ( KiB/s): min= 128, max= 384, per=4.12%, avg=243.15, stdev=67.79, samples=20 00:27:36.238 iops : min= 32, max= 96, avg=60.75, stdev=16.94, samples=20 00:27:36.238 lat (msec) : 250=53.85%, 500=46.15% 00:27:36.238 cpu : usr=96.60%, sys=1.97%, ctx=46, majf=0, minf=15 00:27:36.238 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename2: (groupid=0, jobs=1): err= 0: pid=1009099: Wed May 15 00:43:00 2024 00:27:36.238 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10048msec) 00:27:36.238 slat (usec): min=6, max=103, avg=45.93, stdev=27.77 00:27:36.238 clat (msec): min=194, max=504, avg=323.73, stdev=67.43 00:27:36.238 lat (msec): min=194, max=504, avg=323.78, stdev=67.43 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 194], 5.00th=[ 199], 10.00th=[ 226], 20.00th=[ 271], 00:27:36.238 | 30.00th=[ 288], 40.00th=[ 309], 50.00th=[ 317], 60.00th=[ 338], 00:27:36.238 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 409], 95.00th=[ 430], 00:27:36.238 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 506], 99.95th=[ 506], 00:27:36.238 | 99.99th=[ 506] 00:27:36.238 bw ( 
KiB/s): min= 127, max= 384, per=3.24%, avg=191.95, stdev=76.51, samples=20 00:27:36.238 iops : min= 31, max= 96, avg=47.95, stdev=19.16, samples=20 00:27:36.238 lat (msec) : 250=13.31%, 500=86.29%, 750=0.40% 00:27:36.238 cpu : usr=98.12%, sys=1.41%, ctx=38, majf=0, minf=15 00:27:36.238 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:36.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.238 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.238 filename2: (groupid=0, jobs=1): err= 0: pid=1009100: Wed May 15 00:43:00 2024 00:27:36.238 read: IOPS=47, BW=191KiB/s (196kB/s)(1920KiB/10045msec) 00:27:36.238 slat (usec): min=8, max=279, avg=65.52, stdev=23.15 00:27:36.238 clat (msec): min=148, max=502, avg=333.23, stdev=75.43 00:27:36.238 lat (msec): min=148, max=502, avg=333.30, stdev=75.43 00:27:36.238 clat percentiles (msec): 00:27:36.238 | 1.00th=[ 197], 5.00th=[ 209], 10.00th=[ 224], 20.00th=[ 284], 00:27:36.238 | 30.00th=[ 288], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 380], 00:27:36.238 | 70.00th=[ 397], 80.00th=[ 414], 90.00th=[ 426], 95.00th=[ 447], 00:27:36.238 | 99.00th=[ 493], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:27:36.238 | 99.99th=[ 502] 00:27:36.238 bw ( KiB/s): min= 128, max= 256, per=3.13%, avg=185.60, stdev=63.87, samples=20 00:27:36.238 iops : min= 32, max= 64, avg=46.40, stdev=15.97, samples=20 00:27:36.239 lat (msec) : 250=17.92%, 500=81.67%, 750=0.42% 00:27:36.239 cpu : usr=96.32%, sys=1.97%, ctx=128, majf=0, minf=15 00:27:36.239 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:27:36.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.239 filename2: (groupid=0, jobs=1): err= 0: pid=1009101: Wed May 15 00:43:00 2024 00:27:36.239 read: IOPS=71, BW=287KiB/s (294kB/s)(2880KiB/10042msec) 00:27:36.239 slat (usec): min=5, max=213, avg=28.48, stdev=26.59 00:27:36.239 clat (msec): min=45, max=311, avg=222.89, stdev=50.13 00:27:36.239 lat (msec): min=45, max=311, avg=222.92, stdev=50.13 00:27:36.239 clat percentiles (msec): 00:27:36.239 | 1.00th=[ 46], 5.00th=[ 128], 10.00th=[ 153], 20.00th=[ 211], 00:27:36.239 | 30.00th=[ 220], 40.00th=[ 228], 50.00th=[ 236], 60.00th=[ 239], 00:27:36.239 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 271], 95.00th=[ 275], 00:27:36.239 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:27:36.239 | 99.99th=[ 313] 00:27:36.239 bw ( KiB/s): min= 128, max= 512, per=4.76%, avg=281.60, stdev=78.80, samples=20 00:27:36.239 iops : min= 32, max= 128, avg=70.40, stdev=19.70, samples=20 00:27:36.239 lat (msec) : 50=2.22%, 100=2.22%, 250=72.92%, 500=22.64% 00:27:36.239 cpu : usr=96.11%, sys=2.28%, ctx=56, majf=0, minf=19 00:27:36.239 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:36.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.239 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:27:36.239 filename2: (groupid=0, jobs=1): err= 0: pid=1009102: Wed May 15 00:43:00 2024 00:27:36.239 read: IOPS=70, BW=282KiB/s (289kB/s)(2840KiB/10073msec) 00:27:36.239 slat (usec): min=6, max=116, avg=37.56, stdev=28.08 00:27:36.239 clat (msec): min=39, max=348, avg=226.03, stdev=48.54 00:27:36.239 lat (msec): min=39, max=348, avg=226.07, stdev=48.54 00:27:36.239 clat percentiles (msec): 00:27:36.239 | 1.00th=[ 41], 5.00th=[ 127], 10.00th=[ 190], 20.00th=[ 211], 00:27:36.239 | 30.00th=[ 220], 40.00th=[ 226], 50.00th=[ 232], 60.00th=[ 241], 00:27:36.239 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 271], 95.00th=[ 275], 00:27:36.239 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 351], 00:27:36.239 | 99.99th=[ 351] 00:27:36.239 bw ( KiB/s): min= 144, max= 512, per=4.69%, avg=277.60, stdev=71.23, samples=20 00:27:36.239 iops : min= 36, max= 128, avg=69.40, stdev=17.81, samples=20 00:27:36.239 lat (msec) : 50=2.25%, 100=2.25%, 250=72.11%, 500=23.38% 00:27:36.239 cpu : usr=96.97%, sys=1.91%, ctx=33, majf=0, minf=18 00:27:36.239 IO depths : 1=1.3%, 2=7.2%, 4=23.9%, 8=56.3%, 16=11.3%, 32=0.0%, >=64=0.0% 00:27:36.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.239 filename2: (groupid=0, jobs=1): err= 0: pid=1009103: Wed May 15 00:43:00 2024 00:27:36.239 read: IOPS=68, BW=273KiB/s (280kB/s)(2752KiB/10065msec) 00:27:36.239 slat (nsec): min=8464, max=77332, avg=19784.31, stdev=16528.91 00:27:36.239 clat (msec): min=128, max=322, avg=233.17, stdev=24.41 00:27:36.239 lat (msec): min=128, max=322, avg=233.19, stdev=24.40 00:27:36.239 clat percentiles (msec): 00:27:36.239 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 199], 20.00th=[ 215], 00:27:36.239 | 30.00th=[ 222], 40.00th=[ 228], 50.00th=[ 236], 60.00th=[ 241], 00:27:36.239 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 271], 00:27:36.239 | 99.00th=[ 275], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 321], 00:27:36.239 | 99.99th=[ 321] 00:27:36.239 bw ( KiB/s): min= 144, max= 368, per=4.54%, avg=268.75, stdev=49.97, samples=20 00:27:36.239 iops : min= 36, max= 92, avg=67.15, stdev=12.50, samples=20 00:27:36.239 lat (msec) : 250=75.87%, 500=24.13% 00:27:36.239 cpu : usr=98.37%, sys=1.24%, ctx=23, majf=0, minf=16 00:27:36.239 IO depths : 1=0.9%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:27:36.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.239 filename2: (groupid=0, jobs=1): err= 0: pid=1009104: Wed May 15 00:43:00 2024 00:27:36.239 read: IOPS=54, BW=216KiB/s (221kB/s)(2176KiB/10061msec) 00:27:36.239 slat (usec): min=8, max=223, avg=41.30, stdev=37.95 00:27:36.239 clat (msec): min=140, max=467, avg=295.53, stdev=73.44 00:27:36.239 lat (msec): min=140, max=467, avg=295.57, stdev=73.45 00:27:36.239 clat percentiles (msec): 00:27:36.239 | 1.00th=[ 142], 5.00th=[ 201], 10.00th=[ 207], 20.00th=[ 236], 00:27:36.239 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 288], 60.00th=[ 296], 00:27:36.239 | 70.00th=[ 321], 80.00th=[ 380], 90.00th=[ 393], 95.00th=[ 418], 00:27:36.239 | 99.00th=[ 439], 
99.50th=[ 439], 99.90th=[ 468], 99.95th=[ 468], 00:27:36.239 | 99.99th=[ 468] 00:27:36.239 bw ( KiB/s): min= 128, max= 384, per=3.57%, avg=211.20, stdev=73.89, samples=20 00:27:36.239 iops : min= 32, max= 96, avg=52.80, stdev=18.47, samples=20 00:27:36.239 lat (msec) : 250=31.99%, 500=68.01% 00:27:36.239 cpu : usr=96.31%, sys=2.24%, ctx=141, majf=0, minf=18 00:27:36.239 IO depths : 1=4.6%, 2=10.7%, 4=24.4%, 8=52.4%, 16=7.9%, 32=0.0%, >=64=0.0% 00:27:36.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.239 filename2: (groupid=0, jobs=1): err= 0: pid=1009105: Wed May 15 00:43:00 2024 00:27:36.239 read: IOPS=71, BW=286KiB/s (293kB/s)(2880KiB/10070msec) 00:27:36.239 slat (nsec): min=8631, max=80501, avg=16860.24, stdev=10461.43 00:27:36.239 clat (msec): min=127, max=274, avg=222.83, stdev=33.81 00:27:36.239 lat (msec): min=127, max=274, avg=222.84, stdev=33.81 00:27:36.239 clat percentiles (msec): 00:27:36.239 | 1.00th=[ 128], 5.00th=[ 146], 10.00th=[ 165], 20.00th=[ 205], 00:27:36.239 | 30.00th=[ 215], 40.00th=[ 222], 50.00th=[ 230], 60.00th=[ 239], 00:27:36.239 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 271], 00:27:36.239 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:27:36.239 | 99.99th=[ 275] 00:27:36.239 bw ( KiB/s): min= 144, max= 384, per=4.76%, avg=281.55, stdev=60.87, samples=20 00:27:36.239 iops : min= 36, max= 96, avg=70.35, stdev=15.24, samples=20 00:27:36.239 lat (msec) : 250=82.08%, 500=17.92% 00:27:36.239 cpu : usr=98.12%, sys=1.32%, ctx=24, majf=0, minf=26 00:27:36.239 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:36.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.239 filename2: (groupid=0, jobs=1): err= 0: pid=1009106: Wed May 15 00:43:00 2024 00:27:36.239 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10059msec) 00:27:36.239 slat (nsec): min=4068, max=48794, avg=23123.11, stdev=5875.01 00:27:36.239 clat (msec): min=139, max=515, avg=324.28, stdev=75.72 00:27:36.239 lat (msec): min=139, max=515, avg=324.30, stdev=75.72 00:27:36.239 clat percentiles (msec): 00:27:36.239 | 1.00th=[ 163], 5.00th=[ 205], 10.00th=[ 207], 20.00th=[ 262], 00:27:36.239 | 30.00th=[ 288], 40.00th=[ 300], 50.00th=[ 317], 60.00th=[ 342], 00:27:36.239 | 70.00th=[ 380], 80.00th=[ 393], 90.00th=[ 430], 95.00th=[ 430], 00:27:36.239 | 99.00th=[ 493], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 514], 00:27:36.239 | 99.99th=[ 514] 00:27:36.239 bw ( KiB/s): min= 128, max= 384, per=3.24%, avg=192.00, stdev=71.37, samples=20 00:27:36.239 iops : min= 32, max= 96, avg=48.00, stdev=17.84, samples=20 00:27:36.239 lat (msec) : 250=18.15%, 500=81.05%, 750=0.81% 00:27:36.239 cpu : usr=97.66%, sys=1.70%, ctx=40, majf=0, minf=15 00:27:36.239 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:27:36.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.239 issued rwts: total=496,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:27:36.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:36.239 00:27:36.239 Run status group 0 (all jobs): 00:27:36.239 READ: bw=5902KiB/s (6044kB/s), 191KiB/s-304KiB/s (196kB/s-312kB/s), io=58.1MiB (60.9MB), run=10009-10082msec 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.239 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 bdev_null0 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 [2024-05-15 00:43:01.236484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 bdev_null1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.240 { 00:27:36.240 "params": { 00:27:36.240 "name": "Nvme$subsystem", 00:27:36.240 "trtype": "$TEST_TRANSPORT", 00:27:36.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.240 "adrfam": "ipv4", 00:27:36.240 "trsvcid": "$NVMF_PORT", 00:27:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.240 "hdgst": ${hdgst:-false}, 00:27:36.240 "ddgst": ${ddgst:-false} 00:27:36.240 }, 00:27:36.240 "method": "bdev_nvme_attach_controller" 00:27:36.240 } 00:27:36.240 EOF 00:27:36.240 )") 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.240 { 00:27:36.240 "params": { 00:27:36.240 "name": "Nvme$subsystem", 00:27:36.240 "trtype": "$TEST_TRANSPORT", 00:27:36.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.240 "adrfam": "ipv4", 00:27:36.240 "trsvcid": "$NVMF_PORT", 00:27:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.240 "hdgst": ${hdgst:-false}, 00:27:36.240 "ddgst": ${ddgst:-false} 00:27:36.240 }, 00:27:36.240 "method": "bdev_nvme_attach_controller" 00:27:36.240 } 00:27:36.240 EOF 00:27:36.240 )") 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:36.240 00:43:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:36.240 "params": { 00:27:36.240 "name": "Nvme0", 00:27:36.240 "trtype": "tcp", 00:27:36.240 "traddr": "10.0.0.2", 00:27:36.240 "adrfam": "ipv4", 00:27:36.240 "trsvcid": "4420", 00:27:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.240 "hdgst": false, 00:27:36.240 "ddgst": false 00:27:36.240 }, 00:27:36.240 "method": "bdev_nvme_attach_controller" 00:27:36.240 },{ 00:27:36.240 "params": { 00:27:36.240 "name": "Nvme1", 00:27:36.240 "trtype": "tcp", 00:27:36.240 "traddr": "10.0.0.2", 00:27:36.240 "adrfam": "ipv4", 00:27:36.240 "trsvcid": "4420", 00:27:36.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.240 "hdgst": false, 00:27:36.240 "ddgst": false 00:27:36.240 }, 00:27:36.240 "method": "bdev_nvme_attach_controller" 00:27:36.240 }' 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:36.241 00:43:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.241 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:36.241 ... 00:27:36.241 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:36.241 ... 
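Note: the JSON printed just above is what the fio spdk_bdev plugin consumes via --spdk_json_conf. The following is a rough, hand-runnable sketch of the same initiator-side invocation; the /tmp path, the Nvme0n1 bdev name, and the outer "subsystems"/"bdev" wrapper are assumptions (only the inner attach entries appear in the log), and the fio options mirror the job description lines above.

# Sketch only: re-run the spdk_bdev fio job by hand outside the harness.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The attached namespace is exposed as a bdev (conventionally Nvme0n1), which
# the job references via --filename; bs/iodepth/numjobs mirror the run above.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=8k,16k,128k \
    --iodepth=8 --numjobs=2 --runtime=5 --time_based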
00:27:36.241 fio-3.35 00:27:36.241 Starting 4 threads 00:27:36.241 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.513 00:27:41.513 filename0: (groupid=0, jobs=1): err= 0: pid=1010494: Wed May 15 00:43:07 2024 00:27:41.513 read: IOPS=1777, BW=13.9MiB/s (14.6MB/s)(69.5MiB/5002msec) 00:27:41.513 slat (nsec): min=4696, max=45440, avg=14109.22, stdev=6679.54 00:27:41.513 clat (usec): min=2403, max=9565, avg=4458.99, stdev=746.11 00:27:41.513 lat (usec): min=2418, max=9578, avg=4473.09, stdev=746.93 00:27:41.513 clat percentiles (usec): 00:27:41.513 | 1.00th=[ 3359], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3916], 00:27:41.513 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:27:41.513 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 5735], 95.00th=[ 6521], 00:27:41.513 | 99.00th=[ 6718], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 8291], 00:27:41.513 | 99.99th=[ 9503] 00:27:41.513 bw ( KiB/s): min=13952, max=14509, per=25.11%, avg=14215.70, stdev=227.38, samples=10 00:27:41.513 iops : min= 1744, max= 1813, avg=1776.90, stdev=28.33, samples=10 00:27:41.513 lat (msec) : 4=21.68%, 10=78.32% 00:27:41.513 cpu : usr=94.58%, sys=4.40%, ctx=29, majf=0, minf=0 00:27:41.513 IO depths : 1=0.1%, 2=0.6%, 4=71.5%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.513 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.513 issued rwts: total=8891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.513 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:41.513 filename0: (groupid=0, jobs=1): err= 0: pid=1010495: Wed May 15 00:43:07 2024 00:27:41.513 read: IOPS=1738, BW=13.6MiB/s (14.2MB/s)(67.9MiB/5001msec) 00:27:41.513 slat (nsec): min=4553, max=46895, avg=14893.38, stdev=5947.72 00:27:41.513 clat (usec): min=1552, max=45487, avg=4556.46, stdev=1465.76 00:27:41.513 lat (usec): min=1564, max=45504, avg=4571.35, stdev=1464.80 00:27:41.513 clat percentiles (usec): 00:27:41.513 | 1.00th=[ 3589], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4080], 00:27:41.513 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:27:41.513 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5735], 95.00th=[ 6652], 00:27:41.513 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 9765], 99.95th=[45351], 00:27:41.513 | 99.99th=[45351] 00:27:41.513 bw ( KiB/s): min=12912, max=14336, per=24.55%, avg=13898.67, stdev=402.39, samples=9 00:27:41.513 iops : min= 1614, max= 1792, avg=1737.33, stdev=50.30, samples=9 00:27:41.513 lat (msec) : 2=0.03%, 4=9.82%, 10=90.05%, 50=0.09% 00:27:41.513 cpu : usr=95.10%, sys=4.34%, ctx=5, majf=0, minf=9 00:27:41.513 IO depths : 1=0.2%, 2=0.8%, 4=72.0%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.513 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.513 issued rwts: total=8695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.513 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:41.513 filename1: (groupid=0, jobs=1): err= 0: pid=1010496: Wed May 15 00:43:07 2024 00:27:41.513 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5003msec) 00:27:41.513 slat (nsec): min=4346, max=47360, avg=13657.30, stdev=5616.99 00:27:41.513 clat (usec): min=2452, max=8787, avg=4314.73, stdev=557.67 00:27:41.513 lat (usec): min=2472, max=8803, avg=4328.39, stdev=557.96 00:27:41.513 clat percentiles (usec): 00:27:41.513 | 1.00th=[ 3294], 5.00th=[ 
3752], 10.00th=[ 3818], 20.00th=[ 3916], 00:27:41.513 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:27:41.513 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5276], 00:27:41.513 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 8094], 99.95th=[ 8586], 00:27:41.513 | 99.99th=[ 8848] 00:27:41.514 bw ( KiB/s): min=13712, max=15360, per=25.92%, avg=14678.40, stdev=664.86, samples=10 00:27:41.514 iops : min= 1714, max= 1920, avg=1834.80, stdev=83.11, samples=10 00:27:41.514 lat (msec) : 4=25.29%, 10=74.71% 00:27:41.514 cpu : usr=92.86%, sys=5.06%, ctx=246, majf=0, minf=9 00:27:41.514 IO depths : 1=0.1%, 2=8.0%, 4=65.2%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.514 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.514 issued rwts: total=9182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.514 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:41.514 filename1: (groupid=0, jobs=1): err= 0: pid=1010497: Wed May 15 00:43:07 2024 00:27:41.514 read: IOPS=1727, BW=13.5MiB/s (14.2MB/s)(67.5MiB/5002msec) 00:27:41.514 slat (nsec): min=4847, max=48383, avg=13739.65, stdev=5660.51 00:27:41.514 clat (usec): min=1713, max=50060, avg=4589.15, stdev=1610.90 00:27:41.514 lat (usec): min=1722, max=50073, avg=4602.89, stdev=1610.46 00:27:41.514 clat percentiles (usec): 00:27:41.514 | 1.00th=[ 3261], 5.00th=[ 3851], 10.00th=[ 3949], 20.00th=[ 4015], 00:27:41.514 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4359], 60.00th=[ 4424], 00:27:41.514 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 6063], 95.00th=[ 6456], 00:27:41.514 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[50070], 00:27:41.514 | 99.99th=[50070] 00:27:41.514 bw ( KiB/s): min=12736, max=14544, per=24.40%, avg=13814.40, stdev=519.24, samples=10 00:27:41.514 iops : min= 1592, max= 1818, avg=1726.80, stdev=64.90, samples=10 00:27:41.514 lat (msec) : 2=0.06%, 4=18.22%, 10=81.62%, 100=0.09% 00:27:41.514 cpu : usr=93.40%, sys=4.42%, ctx=231, majf=0, minf=9 00:27:41.514 IO depths : 1=0.1%, 2=0.8%, 4=71.7%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.514 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.514 issued rwts: total=8642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.514 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:41.514 00:27:41.514 Run status group 0 (all jobs): 00:27:41.514 READ: bw=55.3MiB/s (58.0MB/s), 13.5MiB/s-14.3MiB/s (14.2MB/s-15.0MB/s), io=277MiB (290MB), run=5001-5003msec 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.514 00:27:41.514 real 0m24.402s 00:27:41.514 user 4m32.403s 00:27:41.514 sys 0m7.146s 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 ************************************ 00:27:41.514 END TEST fio_dif_rand_params 00:27:41.514 ************************************ 00:27:41.514 00:43:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:41.514 00:43:07 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:41.514 00:43:07 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 ************************************ 00:27:41.514 START TEST fio_dif_digest 00:27:41.514 ************************************ 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 bdev_null0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.514 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:41.773 [2024-05-15 00:43:07.686125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.773 { 00:27:41.773 "params": { 00:27:41.773 "name": "Nvme$subsystem", 00:27:41.773 "trtype": "$TEST_TRANSPORT", 00:27:41.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.773 "adrfam": "ipv4", 00:27:41.773 "trsvcid": "$NVMF_PORT", 00:27:41.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.773 "hdgst": ${hdgst:-false}, 00:27:41.773 "ddgst": ${ddgst:-false} 00:27:41.773 }, 00:27:41.773 "method": 
"bdev_nvme_attach_controller" 00:27:41.773 } 00:27:41.773 EOF 00:27:41.773 )") 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:41.773 "params": { 00:27:41.773 "name": "Nvme0", 00:27:41.773 "trtype": "tcp", 00:27:41.773 "traddr": "10.0.0.2", 00:27:41.773 "adrfam": "ipv4", 00:27:41.773 "trsvcid": "4420", 00:27:41.773 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:41.773 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:41.773 "hdgst": true, 00:27:41.773 "ddgst": true 00:27:41.773 }, 00:27:41.773 "method": "bdev_nvme_attach_controller" 00:27:41.773 }' 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:41.773 00:43:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.031 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:42.031 ... 
00:27:42.031 fio-3.35 00:27:42.031 Starting 3 threads 00:27:42.031 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.254 00:27:54.254 filename0: (groupid=0, jobs=1): err= 0: pid=1011364: Wed May 15 00:43:18 2024 00:27:54.254 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(237MiB/10048msec) 00:27:54.254 slat (nsec): min=4947, max=29254, avg=14796.86, stdev=1843.28 00:27:54.254 clat (usec): min=6251, max=94755, avg=15889.25, stdev=10561.30 00:27:54.254 lat (usec): min=6264, max=94770, avg=15904.04, stdev=10561.31 00:27:54.254 clat percentiles (usec): 00:27:54.254 | 1.00th=[ 8225], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:27:54.254 | 30.00th=[12780], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:27:54.254 | 70.00th=[14746], 80.00th=[15139], 90.00th=[16057], 95.00th=[53740], 00:27:54.254 | 99.00th=[56361], 99.50th=[56886], 99.90th=[93848], 99.95th=[94897], 00:27:54.254 | 99.99th=[94897] 00:27:54.254 bw ( KiB/s): min=18688, max=29696, per=31.76%, avg=24192.00, stdev=2992.95, samples=20 00:27:54.254 iops : min= 146, max= 232, avg=189.00, stdev=23.38, samples=20 00:27:54.254 lat (msec) : 10=11.42%, 20=82.08%, 50=0.26%, 100=6.24% 00:27:54.254 cpu : usr=91.55%, sys=7.41%, ctx=302, majf=0, minf=122 00:27:54.254 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.254 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.254 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:54.254 filename0: (groupid=0, jobs=1): err= 0: pid=1011365: Wed May 15 00:43:18 2024 00:27:54.254 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(273MiB/10046msec) 00:27:54.254 slat (nsec): min=5103, max=63159, avg=14167.47, stdev=1920.84 00:27:54.254 clat (usec): min=7355, max=96904, avg=13749.20, stdev=5636.28 00:27:54.254 lat (usec): min=7369, max=96917, avg=13763.37, stdev=5636.29 00:27:54.254 clat percentiles (usec): 00:27:54.254 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:27:54.254 | 30.00th=[11863], 40.00th=[12911], 50.00th=[13566], 60.00th=[14091], 00:27:54.254 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15926], 00:27:54.254 | 99.00th=[53216], 99.50th=[55837], 99.90th=[57410], 99.95th=[95945], 00:27:54.254 | 99.99th=[96994] 00:27:54.254 bw ( KiB/s): min=21760, max=32512, per=36.70%, avg=27955.20, stdev=2454.90, samples=20 00:27:54.254 iops : min= 170, max= 254, avg=218.40, stdev=19.18, samples=20 00:27:54.254 lat (msec) : 10=4.35%, 20=94.14%, 50=0.18%, 100=1.33% 00:27:54.254 cpu : usr=92.86%, sys=6.58%, ctx=17, majf=0, minf=164 00:27:54.254 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.254 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.254 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:54.254 filename0: (groupid=0, jobs=1): err= 0: pid=1011366: Wed May 15 00:43:18 2024 00:27:54.254 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10046msec) 00:27:54.254 slat (nsec): min=5010, max=39065, avg=18533.21, stdev=2610.06 00:27:54.254 clat (usec): min=6135, max=59000, avg=15776.90, stdev=10299.96 00:27:54.254 lat (usec): min=6149, max=59017, avg=15795.43, stdev=10299.98 00:27:54.254 clat percentiles (usec): 
00:27:54.254 | 1.00th=[ 6849], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10552], 00:27:54.254 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484], 00:27:54.254 | 70.00th=[14877], 80.00th=[15401], 90.00th=[16319], 95.00th=[54264], 00:27:54.254 | 99.00th=[56886], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:27:54.254 | 99.99th=[58983] 00:27:54.254 bw ( KiB/s): min=19968, max=29952, per=31.91%, avg=24309.45, stdev=2645.18, samples=20 00:27:54.254 iops : min= 156, max= 234, avg=189.90, stdev=20.68, samples=20 00:27:54.254 lat (msec) : 10=14.67%, 20=79.13%, 50=0.26%, 100=5.94% 00:27:54.254 cpu : usr=91.99%, sys=7.37%, ctx=18, majf=0, minf=146 00:27:54.254 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.254 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.254 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:54.254 00:27:54.254 Run status group 0 (all jobs): 00:27:54.254 READ: bw=74.4MiB/s (78.0MB/s), 23.5MiB/s-27.2MiB/s (24.7MB/s-28.5MB/s), io=748MiB (784MB), run=10046-10048msec 00:27:54.254 00:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:54.254 00:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:54.254 00:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:54.254 00:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:54.254 00:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.255 00:27:54.255 real 0m11.283s 00:27:54.255 user 0m29.021s 00:27:54.255 sys 0m2.446s 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:54.255 00:43:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:54.255 ************************************ 00:27:54.255 END TEST fio_dif_digest 00:27:54.255 ************************************ 00:27:54.255 00:43:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:54.255 00:43:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:54.255 00:43:18 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.255 00:43:18 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:54.255 00:43:18 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:54.255 00:43:18 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:54.255 00:43:18 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.255 00:43:18 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:54.255 rmmod nvme_tcp 00:27:54.255 rmmod 
nvme_fabrics 00:27:54.255 rmmod nvme_keyring 00:27:54.255 00:43:19 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.255 00:43:19 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:54.255 00:43:19 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:54.255 00:43:19 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1005125 ']' 00:27:54.255 00:43:19 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1005125 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 1005125 ']' 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 1005125 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1005125 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1005125' 00:27:54.255 killing process with pid 1005125 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@966 -- # kill 1005125 00:27:54.255 [2024-05-15 00:43:19.047250] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:54.255 00:43:19 nvmf_dif -- common/autotest_common.sh@971 -- # wait 1005125 00:27:54.255 00:43:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:54.255 00:43:19 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:54.513 Waiting for block devices as requested 00:27:54.513 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:54.513 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:54.513 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:54.772 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:54.772 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:54.772 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:54.772 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:55.030 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:55.030 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:55.030 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:55.030 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:55.289 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:55.289 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:55.289 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:55.289 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:55.289 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:55.547 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:55.547 00:43:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:55.547 00:43:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:55.547 00:43:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.547 00:43:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:55.547 00:43:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.547 00:43:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:55.547 00:43:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.086 00:43:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:27:58.086 00:27:58.086 real 1m8.400s 00:27:58.086 user 6m30.651s 00:27:58.086 sys 0m19.391s 00:27:58.086 00:43:23 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:58.086 00:43:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:58.086 ************************************ 00:27:58.086 END TEST nvmf_dif 00:27:58.086 ************************************ 00:27:58.086 00:43:23 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:58.086 00:43:23 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:58.086 00:43:23 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:58.086 00:43:23 -- common/autotest_common.sh@10 -- # set +x 00:27:58.086 ************************************ 00:27:58.086 START TEST nvmf_abort_qd_sizes 00:27:58.086 ************************************ 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:58.086 * Looking for test storage... 00:27:58.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.086 00:43:23 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.086 00:43:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.985 00:43:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.985 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.985 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes 
-- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.985 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.986 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.986 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.986 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.986 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:28:00.244 00:28:00.244 --- 10.0.0.2 ping statistics --- 00:28:00.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.244 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:28:00.244 00:28:00.244 --- 10.0.0.1 ping statistics --- 00:28:00.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.244 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:00.244 00:43:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:01.617 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:01.617 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:01.617 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:01.617 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:01.617 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:01.617 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:01.617 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:01.617 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:01.617 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:02.553 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1016762 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1016762 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 1016762 ']' 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:02.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:02.553 00:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:02.811 [2024-05-15 00:43:28.759130] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:28:02.811 [2024-05-15 00:43:28.759224] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.811 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.811 [2024-05-15 00:43:28.834760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.811 [2024-05-15 00:43:28.946299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.811 [2024-05-15 00:43:28.946356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.811 [2024-05-15 00:43:28.946369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.811 [2024-05-15 00:43:28.946380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.811 [2024-05-15 00:43:28.946390] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.811 [2024-05-15 00:43:28.946441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.811 [2024-05-15 00:43:28.946499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.811 [2024-05-15 00:43:28.946566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.811 [2024-05-15 00:43:28.946569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:03.743 00:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:03.743 ************************************ 00:28:03.743 START TEST spdk_target_abort 00:28:03.743 ************************************ 00:28:03.743 00:43:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:28:03.743 00:43:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:03.743 00:43:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:28:03.743 00:43:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.743 00:43:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:07.019 spdk_targetn1 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:07.019 [2024-05-15 00:43:32.611262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:07.019 [2024-05-15 00:43:32.643265] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:07.019 [2024-05-15 00:43:32.643541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:07.019 00:43:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:07.019 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.295 Initializing NVMe Controllers 00:28:10.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:10.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:10.295 Initialization complete. Launching workers. 00:28:10.295 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8655, failed: 0 00:28:10.295 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1210, failed to submit 7445 00:28:10.295 success 770, unsuccess 440, failed 0 00:28:10.295 00:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:10.295 00:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:10.295 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.607 Initializing NVMe Controllers 00:28:13.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:13.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:13.607 Initialization complete. Launching workers. 00:28:13.607 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8755, failed: 0 00:28:13.607 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1274, failed to submit 7481 00:28:13.607 success 287, unsuccess 987, failed 0 00:28:13.607 00:43:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:13.607 00:43:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:13.607 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.884 Initializing NVMe Controllers 00:28:16.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:16.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:16.884 Initialization complete. Launching workers. 
00:28:16.884 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30776, failed: 0 00:28:16.884 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2766, failed to submit 28010 00:28:16.884 success 531, unsuccess 2235, failed 0 00:28:16.884 00:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:16.884 00:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:16.884 00:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.884 00:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:16.884 00:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:16.884 00:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:16.884 00:43:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1016762 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 1016762 ']' 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 1016762 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1016762 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1016762' 00:28:17.816 killing process with pid 1016762 00:28:17.816 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 1016762 00:28:17.816 [2024-05-15 00:43:43.698057] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:17.817 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 1016762 00:28:18.075 00:28:18.075 real 0m14.217s 00:28:18.075 user 0m55.281s 00:28:18.075 sys 0m3.075s 00:28:18.075 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:18.075 00:43:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:18.075 ************************************ 00:28:18.075 END TEST spdk_target_abort 00:28:18.075 ************************************ 00:28:18.075 00:43:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:18.075 00:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:18.075 00:43:44 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:28:18.075 00:43:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:18.075 ************************************ 00:28:18.075 START TEST kernel_target_abort 00:28:18.075 ************************************ 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:18.075 00:43:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:19.449 Waiting for block devices as requested 00:28:19.449 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:19.449 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:19.449 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:19.449 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:19.708 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:19.708 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:19.708 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:19.708 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:19.708 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:19.966 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:19.966 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:19.966 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:20.223 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:20.223 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:20.223 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:20.223 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:20.479 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:20.479 No valid GPT data, bailing 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:20.479 00:43:46 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:20.479 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:28:20.480 00:28:20.480 Discovery Log Number of Records 2, Generation counter 2 00:28:20.480 =====Discovery Log Entry 0====== 00:28:20.480 trtype: tcp 00:28:20.480 adrfam: ipv4 00:28:20.480 subtype: current discovery subsystem 00:28:20.480 treq: not specified, sq flow control disable supported 00:28:20.480 portid: 1 00:28:20.480 trsvcid: 4420 00:28:20.480 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:20.480 traddr: 10.0.0.1 00:28:20.480 eflags: none 00:28:20.480 sectype: none 00:28:20.480 =====Discovery Log Entry 1====== 00:28:20.480 trtype: tcp 00:28:20.480 adrfam: ipv4 00:28:20.480 subtype: nvme subsystem 00:28:20.480 treq: not specified, sq flow control disable supported 00:28:20.480 portid: 1 00:28:20.480 trsvcid: 4420 00:28:20.480 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:20.480 traddr: 10.0.0.1 00:28:20.480 eflags: none 00:28:20.480 sectype: none 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.480 00:43:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:20.480 00:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.480 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.757 Initializing NVMe Controllers 00:28:23.757 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:23.757 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:23.757 Initialization complete. Launching workers. 00:28:23.757 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27904, failed: 0 00:28:23.757 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27904, failed to submit 0 00:28:23.757 success 0, unsuccess 27904, failed 0 00:28:23.757 00:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.757 00:43:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.757 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.034 Initializing NVMe Controllers 00:28:27.034 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:27.034 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:27.034 Initialization complete. Launching workers. 
00:28:27.034 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54797, failed: 0 00:28:27.034 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13798, failed to submit 40999 00:28:27.035 success 0, unsuccess 13798, failed 0 00:28:27.035 00:43:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.035 00:43:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.035 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.312 Initializing NVMe Controllers 00:28:30.312 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.312 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.312 Initialization complete. Launching workers. 00:28:30.312 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53737, failed: 0 00:28:30.312 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13390, failed to submit 40347 00:28:30.312 success 0, unsuccess 13390, failed 0 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:30.312 00:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:31.246 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:31.246 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:31.246 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:31.246 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:31.246 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:31.246 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:31.246 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:31.246 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:31.246 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:31.246 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:31.246 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:31.246 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:31.246 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:31.246 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:28:31.246 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:31.246 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:32.182 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:32.182 00:28:32.182 real 0m14.262s 00:28:32.182 user 0m4.474s 00:28:32.182 sys 0m3.461s 00:28:32.182 00:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:32.182 00:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.182 ************************************ 00:28:32.182 END TEST kernel_target_abort 00:28:32.182 ************************************ 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:32.182 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:32.182 rmmod nvme_tcp 00:28:32.440 rmmod nvme_fabrics 00:28:32.440 rmmod nvme_keyring 00:28:32.440 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.440 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:32.440 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:32.440 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1016762 ']' 00:28:32.440 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1016762 00:28:32.441 00:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 1016762 ']' 00:28:32.441 00:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 1016762 00:28:32.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (1016762) - No such process 00:28:32.441 00:43:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 1016762 is not found' 00:28:32.441 Process with pid 1016762 is not found 00:28:32.441 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:32.441 00:43:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:33.409 Waiting for block devices as requested 00:28:33.409 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:33.668 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:33.668 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:33.668 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:33.668 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:33.927 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:33.927 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:33.927 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:33.927 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:34.184 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:34.184 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:34.184 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:34.184 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:34.442 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:34.442 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:34.442 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:28:34.442 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:34.700 00:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.700 00:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.700 00:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.700 00:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.700 00:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.700 00:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:34.700 00:44:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.600 00:44:02 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:36.600 00:28:36.600 real 0m39.056s 00:28:36.600 user 1m2.223s 00:28:36.600 sys 0m10.280s 00:28:36.600 00:44:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:36.600 00:44:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:36.600 ************************************ 00:28:36.600 END TEST nvmf_abort_qd_sizes 00:28:36.600 ************************************ 00:28:36.600 00:44:02 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:36.600 00:44:02 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:36.600 00:44:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:36.600 00:44:02 -- common/autotest_common.sh@10 -- # set +x 00:28:36.859 ************************************ 00:28:36.859 START TEST keyring_file 00:28:36.859 ************************************ 00:28:36.859 00:44:02 keyring_file -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:36.859 * Looking for test storage... 
00:28:36.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.859 00:44:02 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.859 00:44:02 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.859 00:44:02 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.859 00:44:02 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.859 00:44:02 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.859 00:44:02 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.859 00:44:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:36.859 00:44:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xHS6gU0iIN 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:36.859 00:44:02 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xHS6gU0iIN 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xHS6gU0iIN 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xHS6gU0iIN 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YikgJ0QpoN 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:36.859 00:44:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YikgJ0QpoN 00:28:36.859 00:44:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YikgJ0QpoN 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YikgJ0QpoN 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=1022948 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:36.859 00:44:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1022948 00:28:36.859 00:44:02 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 1022948 ']' 00:28:36.859 00:44:02 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.859 00:44:02 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:36.859 00:44:02 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.859 00:44:02 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:36.859 00:44:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:36.859 [2024-05-15 00:44:02.991641] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:28:36.859 [2024-05-15 00:44:02.991741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022948 ] 00:28:37.117 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.117 [2024-05-15 00:44:03.063038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.117 [2024-05-15 00:44:03.183810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:28:38.051 00:44:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:38.051 [2024-05-15 00:44:03.944760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.051 null0 00:28:38.051 [2024-05-15 00:44:03.976756] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:38.051 [2024-05-15 00:44:03.976823] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:38.051 [2024-05-15 00:44:03.977306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:38.051 [2024-05-15 00:44:03.984796] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:38.051 00:44:03 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:38.051 [2024-05-15 00:44:03.992814] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:38.051 request: 00:28:38.051 { 00:28:38.051 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.051 "secure_channel": false, 00:28:38.051 "listen_address": { 00:28:38.051 "trtype": "tcp", 00:28:38.051 "traddr": "127.0.0.1", 00:28:38.051 "trsvcid": "4420" 00:28:38.051 }, 00:28:38.051 "method": "nvmf_subsystem_add_listener", 00:28:38.051 "req_id": 1 00:28:38.051 } 00:28:38.051 Got JSON-RPC error response 00:28:38.051 response: 00:28:38.051 { 00:28:38.051 "code": -32602, 00:28:38.051 
"message": "Invalid parameters" 00:28:38.051 } 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:38.051 00:44:03 keyring_file -- keyring/file.sh@46 -- # bperfpid=1023073 00:28:38.051 00:44:03 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:38.051 00:44:03 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1023073 /var/tmp/bperf.sock 00:28:38.051 00:44:03 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 1023073 ']' 00:28:38.052 00:44:03 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.052 00:44:03 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:38.052 00:44:03 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:38.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.052 00:44:03 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:38.052 00:44:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:38.052 [2024-05-15 00:44:04.041062] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:28:38.052 [2024-05-15 00:44:04.041134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023073 ] 00:28:38.052 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.052 [2024-05-15 00:44:04.110583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.310 [2024-05-15 00:44:04.225894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.310 00:44:04 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:38.310 00:44:04 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:28:38.310 00:44:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:38.310 00:44:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:38.568 00:44:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YikgJ0QpoN 00:28:38.568 00:44:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YikgJ0QpoN 00:28:38.825 00:44:04 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:38.825 00:44:04 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:38.825 00:44:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:38.825 00:44:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:38.825 00:44:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:28:39.082 00:44:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.xHS6gU0iIN == \/\t\m\p\/\t\m\p\.\x\H\S\6\g\U\0\i\I\N ]] 00:28:39.082 00:44:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:39.082 00:44:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:39.082 00:44:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:39.082 00:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:39.082 00:44:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:39.340 00:44:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YikgJ0QpoN == \/\t\m\p\/\t\m\p\.\Y\i\k\g\J\0\Q\p\o\N ]] 00:28:39.340 00:44:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:39.340 00:44:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:39.340 00:44:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:39.340 00:44:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:39.340 00:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:39.340 00:44:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:39.597 00:44:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:39.597 00:44:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:39.597 00:44:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:39.597 00:44:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:39.598 00:44:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:39.598 00:44:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:39.598 00:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:39.855 00:44:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:39.855 00:44:05 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:39.855 00:44:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:39.855 [2024-05-15 00:44:06.012296] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:40.113 nvme0n1 00:28:40.113 00:44:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:40.113 00:44:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:40.113 00:44:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:40.113 00:44:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.113 00:44:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.113 00:44:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:40.370 00:44:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:40.370 00:44:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:40.371 00:44:06 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:40.371 00:44:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:40.371 00:44:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.371 00:44:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:40.371 00:44:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.629 00:44:06 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:40.629 00:44:06 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.629 Running I/O for 1 seconds... 00:28:42.001 00:28:42.001 Latency(us) 00:28:42.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.001 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:42.001 nvme0n1 : 1.03 4167.41 16.28 0.00 0.00 30426.68 8883.77 39807.05 00:28:42.001 =================================================================================================================== 00:28:42.001 Total : 4167.41 16.28 0.00 0.00 30426.68 8883.77 39807.05 00:28:42.001 0 00:28:42.001 00:44:07 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:42.001 00:44:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:42.001 00:44:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:42.001 00:44:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:42.001 00:44:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:42.001 00:44:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:42.001 00:44:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:42.001 00:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:42.259 00:44:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:42.259 00:44:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:42.259 00:44:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:42.259 00:44:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:42.259 00:44:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:42.259 00:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:42.259 00:44:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:42.517 00:44:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:42.517 00:44:08 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:42.517 00:44:08 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:28:42.517 00:44:08 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:42.517 00:44:08 keyring_file -- common/autotest_common.sh@637 -- # 
local arg=bperf_cmd 00:28:42.517 00:44:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.517 00:44:08 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:28:42.517 00:44:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:42.517 00:44:08 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:42.517 00:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:42.774 [2024-05-15 00:44:08.742887] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:42.774 [2024-05-15 00:44:08.743327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ccf30 (107): Transport endpoint is not connected 00:28:42.774 [2024-05-15 00:44:08.744305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ccf30 (9): Bad file descriptor 00:28:42.774 [2024-05-15 00:44:08.745303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:42.774 [2024-05-15 00:44:08.745327] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:42.774 [2024-05-15 00:44:08.745359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:42.774 request: 00:28:42.774 { 00:28:42.774 "name": "nvme0", 00:28:42.774 "trtype": "tcp", 00:28:42.774 "traddr": "127.0.0.1", 00:28:42.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:42.774 "adrfam": "ipv4", 00:28:42.774 "trsvcid": "4420", 00:28:42.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:42.774 "psk": "key1", 00:28:42.774 "method": "bdev_nvme_attach_controller", 00:28:42.774 "req_id": 1 00:28:42.774 } 00:28:42.774 Got JSON-RPC error response 00:28:42.774 response: 00:28:42.774 { 00:28:42.774 "code": -32602, 00:28:42.774 "message": "Invalid parameters" 00:28:42.774 } 00:28:42.774 00:44:08 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:28:42.774 00:44:08 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:42.774 00:44:08 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:42.774 00:44:08 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:42.774 00:44:08 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:42.774 00:44:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:42.774 00:44:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:42.774 00:44:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:42.774 00:44:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:42.774 00:44:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.032 00:44:09 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:43.032 00:44:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:43.032 00:44:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:43.032 00:44:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:43.032 00:44:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:43.032 00:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.032 00:44:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:43.290 00:44:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:43.290 00:44:09 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:43.290 00:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:43.548 00:44:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:43.548 00:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:43.805 00:44:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:43.805 00:44:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.805 00:44:09 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:44.063 00:44:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:44.063 00:44:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xHS6gU0iIN 00:28:44.063 00:44:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:44.063 00:44:10 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:28:44.063 00:44:10 
keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:44.063 00:44:10 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:28:44.063 00:44:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:44.063 00:44:10 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:28:44.063 00:44:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:44.063 00:44:10 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:44.063 00:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:44.320 [2024-05-15 00:44:10.242117] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xHS6gU0iIN': 0100660 00:28:44.320 [2024-05-15 00:44:10.242153] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:44.320 request: 00:28:44.320 { 00:28:44.320 "name": "key0", 00:28:44.320 "path": "/tmp/tmp.xHS6gU0iIN", 00:28:44.320 "method": "keyring_file_add_key", 00:28:44.320 "req_id": 1 00:28:44.320 } 00:28:44.320 Got JSON-RPC error response 00:28:44.320 response: 00:28:44.320 { 00:28:44.320 "code": -1, 00:28:44.320 "message": "Operation not permitted" 00:28:44.320 } 00:28:44.320 00:44:10 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:28:44.320 00:44:10 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:44.320 00:44:10 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:44.320 00:44:10 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:44.320 00:44:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xHS6gU0iIN 00:28:44.320 00:44:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:44.320 00:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xHS6gU0iIN 00:28:44.578 00:44:10 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xHS6gU0iIN 00:28:44.578 00:44:10 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:44.578 00:44:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:44.578 00:44:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:44.578 00:44:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:44.578 00:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:44.578 00:44:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:44.836 00:44:10 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:44.836 00:44:10 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:44.836 00:44:10 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:28:44.836 00:44:10 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:44.836 00:44:10 
keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:28:44.836 00:44:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:44.836 00:44:10 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:28:44.836 00:44:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:44.836 00:44:10 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:44.836 00:44:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:44.836 [2024-05-15 00:44:10.984179] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xHS6gU0iIN': No such file or directory 00:28:44.836 [2024-05-15 00:44:10.984235] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:44.836 [2024-05-15 00:44:10.984270] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:44.836 [2024-05-15 00:44:10.984296] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:44.836 [2024-05-15 00:44:10.984307] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:44.836 request: 00:28:44.836 { 00:28:44.836 "name": "nvme0", 00:28:44.836 "trtype": "tcp", 00:28:44.836 "traddr": "127.0.0.1", 00:28:44.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:44.836 "adrfam": "ipv4", 00:28:44.836 "trsvcid": "4420", 00:28:44.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:44.836 "psk": "key0", 00:28:44.836 "method": "bdev_nvme_attach_controller", 00:28:44.836 "req_id": 1 00:28:44.836 } 00:28:44.836 Got JSON-RPC error response 00:28:44.836 response: 00:28:44.836 { 00:28:44.836 "code": -19, 00:28:44.836 "message": "No such device" 00:28:44.836 } 00:28:45.094 00:44:11 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:28:45.094 00:44:11 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:45.094 00:44:11 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:45.094 00:44:11 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:45.094 00:44:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:45.094 00:44:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oeUHS9XYnZ 00:28:45.094 00:44:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:45.094 00:44:11 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:45.094 00:44:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:45.094 00:44:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:45.094 00:44:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:45.094 00:44:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:45.094 00:44:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:45.352 00:44:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oeUHS9XYnZ 00:28:45.352 00:44:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oeUHS9XYnZ 00:28:45.352 00:44:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.oeUHS9XYnZ 00:28:45.352 00:44:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oeUHS9XYnZ 00:28:45.352 00:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oeUHS9XYnZ 00:28:45.609 00:44:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:45.609 00:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:45.867 nvme0n1 00:28:45.867 00:44:11 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:45.867 00:44:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:45.867 00:44:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:45.867 00:44:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:45.867 00:44:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:45.867 00:44:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.125 00:44:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:46.125 00:44:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:46.125 00:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:46.383 00:44:12 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:46.383 00:44:12 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:46.383 00:44:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.383 00:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.383 00:44:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:46.640 00:44:12 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:46.640 00:44:12 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:46.640 00:44:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:46.640 00:44:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:46.640 00:44:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.640 00:44:12 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.640 00:44:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:46.898 00:44:12 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:46.898 00:44:12 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:46.898 00:44:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:47.155 00:44:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:47.155 00:44:13 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:47.155 00:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.424 00:44:13 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:47.424 00:44:13 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oeUHS9XYnZ 00:28:47.424 00:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oeUHS9XYnZ 00:28:47.712 00:44:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YikgJ0QpoN 00:28:47.712 00:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YikgJ0QpoN 00:28:47.712 00:44:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:47.712 00:44:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:47.969 nvme0n1 00:28:48.226 00:44:14 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:48.226 00:44:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:48.484 00:44:14 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:48.484 "subsystems": [ 00:28:48.484 { 00:28:48.484 "subsystem": "keyring", 00:28:48.484 "config": [ 00:28:48.484 { 00:28:48.484 "method": "keyring_file_add_key", 00:28:48.484 "params": { 00:28:48.484 "name": "key0", 00:28:48.484 "path": "/tmp/tmp.oeUHS9XYnZ" 00:28:48.484 } 00:28:48.484 }, 00:28:48.484 { 00:28:48.484 "method": "keyring_file_add_key", 00:28:48.484 "params": { 00:28:48.484 "name": "key1", 00:28:48.484 "path": "/tmp/tmp.YikgJ0QpoN" 00:28:48.484 } 00:28:48.484 } 00:28:48.484 ] 00:28:48.484 }, 00:28:48.484 { 00:28:48.484 "subsystem": "iobuf", 00:28:48.484 "config": [ 00:28:48.484 { 00:28:48.484 "method": "iobuf_set_options", 00:28:48.484 "params": { 00:28:48.484 "small_pool_count": 8192, 00:28:48.484 "large_pool_count": 1024, 00:28:48.485 "small_bufsize": 8192, 00:28:48.485 "large_bufsize": 135168 00:28:48.485 } 00:28:48.485 } 00:28:48.485 ] 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "subsystem": "sock", 00:28:48.485 "config": [ 00:28:48.485 { 00:28:48.485 "method": "sock_impl_set_options", 00:28:48.485 "params": { 00:28:48.485 
"impl_name": "posix", 00:28:48.485 "recv_buf_size": 2097152, 00:28:48.485 "send_buf_size": 2097152, 00:28:48.485 "enable_recv_pipe": true, 00:28:48.485 "enable_quickack": false, 00:28:48.485 "enable_placement_id": 0, 00:28:48.485 "enable_zerocopy_send_server": true, 00:28:48.485 "enable_zerocopy_send_client": false, 00:28:48.485 "zerocopy_threshold": 0, 00:28:48.485 "tls_version": 0, 00:28:48.485 "enable_ktls": false 00:28:48.485 } 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "method": "sock_impl_set_options", 00:28:48.485 "params": { 00:28:48.485 "impl_name": "ssl", 00:28:48.485 "recv_buf_size": 4096, 00:28:48.485 "send_buf_size": 4096, 00:28:48.485 "enable_recv_pipe": true, 00:28:48.485 "enable_quickack": false, 00:28:48.485 "enable_placement_id": 0, 00:28:48.485 "enable_zerocopy_send_server": true, 00:28:48.485 "enable_zerocopy_send_client": false, 00:28:48.485 "zerocopy_threshold": 0, 00:28:48.485 "tls_version": 0, 00:28:48.485 "enable_ktls": false 00:28:48.485 } 00:28:48.485 } 00:28:48.485 ] 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "subsystem": "vmd", 00:28:48.485 "config": [] 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "subsystem": "accel", 00:28:48.485 "config": [ 00:28:48.485 { 00:28:48.485 "method": "accel_set_options", 00:28:48.485 "params": { 00:28:48.485 "small_cache_size": 128, 00:28:48.485 "large_cache_size": 16, 00:28:48.485 "task_count": 2048, 00:28:48.485 "sequence_count": 2048, 00:28:48.485 "buf_count": 2048 00:28:48.485 } 00:28:48.485 } 00:28:48.485 ] 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "subsystem": "bdev", 00:28:48.485 "config": [ 00:28:48.485 { 00:28:48.485 "method": "bdev_set_options", 00:28:48.485 "params": { 00:28:48.485 "bdev_io_pool_size": 65535, 00:28:48.485 "bdev_io_cache_size": 256, 00:28:48.485 "bdev_auto_examine": true, 00:28:48.485 "iobuf_small_cache_size": 128, 00:28:48.485 "iobuf_large_cache_size": 16 00:28:48.485 } 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "method": "bdev_raid_set_options", 00:28:48.485 "params": { 00:28:48.485 "process_window_size_kb": 1024 00:28:48.485 } 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "method": "bdev_iscsi_set_options", 00:28:48.485 "params": { 00:28:48.485 "timeout_sec": 30 00:28:48.485 } 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "method": "bdev_nvme_set_options", 00:28:48.485 "params": { 00:28:48.485 "action_on_timeout": "none", 00:28:48.485 "timeout_us": 0, 00:28:48.485 "timeout_admin_us": 0, 00:28:48.485 "keep_alive_timeout_ms": 10000, 00:28:48.485 "arbitration_burst": 0, 00:28:48.485 "low_priority_weight": 0, 00:28:48.485 "medium_priority_weight": 0, 00:28:48.485 "high_priority_weight": 0, 00:28:48.485 "nvme_adminq_poll_period_us": 10000, 00:28:48.485 "nvme_ioq_poll_period_us": 0, 00:28:48.485 "io_queue_requests": 512, 00:28:48.485 "delay_cmd_submit": true, 00:28:48.485 "transport_retry_count": 4, 00:28:48.485 "bdev_retry_count": 3, 00:28:48.485 "transport_ack_timeout": 0, 00:28:48.485 "ctrlr_loss_timeout_sec": 0, 00:28:48.485 "reconnect_delay_sec": 0, 00:28:48.485 "fast_io_fail_timeout_sec": 0, 00:28:48.485 "disable_auto_failback": false, 00:28:48.485 "generate_uuids": false, 00:28:48.485 "transport_tos": 0, 00:28:48.485 "nvme_error_stat": false, 00:28:48.485 "rdma_srq_size": 0, 00:28:48.485 "io_path_stat": false, 00:28:48.485 "allow_accel_sequence": false, 00:28:48.485 "rdma_max_cq_size": 0, 00:28:48.485 "rdma_cm_event_timeout_ms": 0, 00:28:48.485 "dhchap_digests": [ 00:28:48.485 "sha256", 00:28:48.485 "sha384", 00:28:48.485 "sha512" 00:28:48.485 ], 00:28:48.485 "dhchap_dhgroups": [ 00:28:48.485 "null", 
00:28:48.485 "ffdhe2048", 00:28:48.485 "ffdhe3072", 00:28:48.485 "ffdhe4096", 00:28:48.485 "ffdhe6144", 00:28:48.485 "ffdhe8192" 00:28:48.485 ] 00:28:48.485 } 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "method": "bdev_nvme_attach_controller", 00:28:48.485 "params": { 00:28:48.485 "name": "nvme0", 00:28:48.485 "trtype": "TCP", 00:28:48.485 "adrfam": "IPv4", 00:28:48.485 "traddr": "127.0.0.1", 00:28:48.485 "trsvcid": "4420", 00:28:48.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.485 "prchk_reftag": false, 00:28:48.485 "prchk_guard": false, 00:28:48.485 "ctrlr_loss_timeout_sec": 0, 00:28:48.485 "reconnect_delay_sec": 0, 00:28:48.485 "fast_io_fail_timeout_sec": 0, 00:28:48.485 "psk": "key0", 00:28:48.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.485 "hdgst": false, 00:28:48.485 "ddgst": false 00:28:48.485 } 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "method": "bdev_nvme_set_hotplug", 00:28:48.485 "params": { 00:28:48.485 "period_us": 100000, 00:28:48.485 "enable": false 00:28:48.485 } 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "method": "bdev_wait_for_examine" 00:28:48.485 } 00:28:48.485 ] 00:28:48.485 }, 00:28:48.485 { 00:28:48.485 "subsystem": "nbd", 00:28:48.485 "config": [] 00:28:48.485 } 00:28:48.485 ] 00:28:48.485 }' 00:28:48.485 00:44:14 keyring_file -- keyring/file.sh@114 -- # killprocess 1023073 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 1023073 ']' 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@951 -- # kill -0 1023073 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@952 -- # uname 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1023073 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1023073' 00:28:48.485 killing process with pid 1023073 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@966 -- # kill 1023073 00:28:48.485 Received shutdown signal, test time was about 1.000000 seconds 00:28:48.485 00:28:48.485 Latency(us) 00:28:48.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.485 =================================================================================================================== 00:28:48.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:48.485 00:44:14 keyring_file -- common/autotest_common.sh@971 -- # wait 1023073 00:28:48.743 00:44:14 keyring_file -- keyring/file.sh@117 -- # bperfpid=1024419 00:28:48.743 00:44:14 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1024419 /var/tmp/bperf.sock 00:28:48.743 00:44:14 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 1024419 ']' 00:28:48.743 00:44:14 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:48.743 00:44:14 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:48.743 00:44:14 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:48.743 00:44:14 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:28:48.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:48.743 00:44:14 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:48.743 00:44:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:48.743 00:44:14 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:48.743 "subsystems": [ 00:28:48.743 { 00:28:48.743 "subsystem": "keyring", 00:28:48.743 "config": [ 00:28:48.743 { 00:28:48.743 "method": "keyring_file_add_key", 00:28:48.743 "params": { 00:28:48.743 "name": "key0", 00:28:48.743 "path": "/tmp/tmp.oeUHS9XYnZ" 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": "keyring_file_add_key", 00:28:48.743 "params": { 00:28:48.743 "name": "key1", 00:28:48.743 "path": "/tmp/tmp.YikgJ0QpoN" 00:28:48.743 } 00:28:48.743 } 00:28:48.743 ] 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "subsystem": "iobuf", 00:28:48.743 "config": [ 00:28:48.743 { 00:28:48.743 "method": "iobuf_set_options", 00:28:48.743 "params": { 00:28:48.743 "small_pool_count": 8192, 00:28:48.743 "large_pool_count": 1024, 00:28:48.743 "small_bufsize": 8192, 00:28:48.743 "large_bufsize": 135168 00:28:48.743 } 00:28:48.743 } 00:28:48.743 ] 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "subsystem": "sock", 00:28:48.743 "config": [ 00:28:48.743 { 00:28:48.743 "method": "sock_impl_set_options", 00:28:48.743 "params": { 00:28:48.743 "impl_name": "posix", 00:28:48.743 "recv_buf_size": 2097152, 00:28:48.743 "send_buf_size": 2097152, 00:28:48.743 "enable_recv_pipe": true, 00:28:48.743 "enable_quickack": false, 00:28:48.743 "enable_placement_id": 0, 00:28:48.743 "enable_zerocopy_send_server": true, 00:28:48.743 "enable_zerocopy_send_client": false, 00:28:48.743 "zerocopy_threshold": 0, 00:28:48.743 "tls_version": 0, 00:28:48.743 "enable_ktls": false 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": "sock_impl_set_options", 00:28:48.743 "params": { 00:28:48.743 "impl_name": "ssl", 00:28:48.743 "recv_buf_size": 4096, 00:28:48.743 "send_buf_size": 4096, 00:28:48.743 "enable_recv_pipe": true, 00:28:48.743 "enable_quickack": false, 00:28:48.743 "enable_placement_id": 0, 00:28:48.743 "enable_zerocopy_send_server": true, 00:28:48.743 "enable_zerocopy_send_client": false, 00:28:48.743 "zerocopy_threshold": 0, 00:28:48.743 "tls_version": 0, 00:28:48.743 "enable_ktls": false 00:28:48.743 } 00:28:48.743 } 00:28:48.743 ] 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "subsystem": "vmd", 00:28:48.743 "config": [] 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "subsystem": "accel", 00:28:48.743 "config": [ 00:28:48.743 { 00:28:48.743 "method": "accel_set_options", 00:28:48.743 "params": { 00:28:48.743 "small_cache_size": 128, 00:28:48.743 "large_cache_size": 16, 00:28:48.743 "task_count": 2048, 00:28:48.743 "sequence_count": 2048, 00:28:48.743 "buf_count": 2048 00:28:48.743 } 00:28:48.743 } 00:28:48.743 ] 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "subsystem": "bdev", 00:28:48.743 "config": [ 00:28:48.743 { 00:28:48.743 "method": "bdev_set_options", 00:28:48.743 "params": { 00:28:48.743 "bdev_io_pool_size": 65535, 00:28:48.743 "bdev_io_cache_size": 256, 00:28:48.743 "bdev_auto_examine": true, 00:28:48.743 "iobuf_small_cache_size": 128, 00:28:48.743 "iobuf_large_cache_size": 16 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": "bdev_raid_set_options", 00:28:48.743 "params": { 00:28:48.743 "process_window_size_kb": 1024 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": 
"bdev_iscsi_set_options", 00:28:48.743 "params": { 00:28:48.743 "timeout_sec": 30 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": "bdev_nvme_set_options", 00:28:48.743 "params": { 00:28:48.743 "action_on_timeout": "none", 00:28:48.743 "timeout_us": 0, 00:28:48.743 "timeout_admin_us": 0, 00:28:48.743 "keep_alive_timeout_ms": 10000, 00:28:48.743 "arbitration_burst": 0, 00:28:48.743 "low_priority_weight": 0, 00:28:48.743 "medium_priority_weight": 0, 00:28:48.743 "high_priority_weight": 0, 00:28:48.743 "nvme_adminq_poll_period_us": 10000, 00:28:48.743 "nvme_ioq_poll_period_us": 0, 00:28:48.743 "io_queue_requests": 512, 00:28:48.743 "delay_cmd_submit": true, 00:28:48.743 "transport_retry_count": 4, 00:28:48.743 "bdev_retry_count": 3, 00:28:48.743 "transport_ack_timeout": 0, 00:28:48.743 "ctrlr_loss_timeout_sec": 0, 00:28:48.743 "reconnect_delay_sec": 0, 00:28:48.743 "fast_io_fail_timeout_sec": 0, 00:28:48.743 "disable_auto_failback": false, 00:28:48.743 "generate_uuids": false, 00:28:48.743 "transport_tos": 0, 00:28:48.743 "nvme_error_stat": false, 00:28:48.743 "rdma_srq_size": 0, 00:28:48.743 "io_path_stat": false, 00:28:48.743 "allow_accel_sequence": false, 00:28:48.743 "rdma_max_cq_size": 0, 00:28:48.743 "rdma_cm_event_timeout_ms": 0, 00:28:48.743 "dhchap_digests": [ 00:28:48.743 "sha256", 00:28:48.743 "sha384", 00:28:48.743 "sha512" 00:28:48.743 ], 00:28:48.743 "dhchap_dhgroups": [ 00:28:48.743 "null", 00:28:48.743 "ffdhe2048", 00:28:48.743 "ffdhe3072", 00:28:48.743 "ffdhe4096", 00:28:48.743 "ffdhe6144", 00:28:48.743 "ffdhe8192" 00:28:48.743 ] 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": "bdev_nvme_attach_controller", 00:28:48.743 "params": { 00:28:48.743 "name": "nvme0", 00:28:48.743 "trtype": "TCP", 00:28:48.743 "adrfam": "IPv4", 00:28:48.743 "traddr": "127.0.0.1", 00:28:48.743 "trsvcid": "4420", 00:28:48.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.743 "prchk_reftag": false, 00:28:48.743 "prchk_guard": false, 00:28:48.743 "ctrlr_loss_timeout_sec": 0, 00:28:48.743 "reconnect_delay_sec": 0, 00:28:48.743 "fast_io_fail_timeout_sec": 0, 00:28:48.743 "psk": "key0", 00:28:48.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.743 "hdgst": false, 00:28:48.743 "ddgst": false 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": "bdev_nvme_set_hotplug", 00:28:48.743 "params": { 00:28:48.743 "period_us": 100000, 00:28:48.743 "enable": false 00:28:48.743 } 00:28:48.743 }, 00:28:48.743 { 00:28:48.743 "method": "bdev_wait_for_examine" 00:28:48.743 } 00:28:48.743 ] 00:28:48.744 }, 00:28:48.744 { 00:28:48.744 "subsystem": "nbd", 00:28:48.744 "config": [] 00:28:48.744 } 00:28:48.744 ] 00:28:48.744 }' 00:28:48.744 [2024-05-15 00:44:14.775613] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:28:48.744 [2024-05-15 00:44:14.775696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024419 ] 00:28:48.744 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.744 [2024-05-15 00:44:14.842333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.001 [2024-05-15 00:44:14.952138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.001 [2024-05-15 00:44:15.127389] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:49.566 00:44:15 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:49.566 00:44:15 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:28:49.566 00:44:15 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:49.566 00:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.566 00:44:15 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:49.824 00:44:15 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:49.824 00:44:15 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:49.824 00:44:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:49.824 00:44:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.824 00:44:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.824 00:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.824 00:44:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:50.081 00:44:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:50.081 00:44:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:50.081 00:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:50.081 00:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:50.081 00:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:50.081 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:50.081 00:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:50.339 00:44:16 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:50.339 00:44:16 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:50.339 00:44:16 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:50.339 00:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:50.596 00:44:16 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:50.597 00:44:16 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:50.597 00:44:16 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.oeUHS9XYnZ /tmp/tmp.YikgJ0QpoN 00:28:50.597 00:44:16 keyring_file -- keyring/file.sh@20 -- # killprocess 1024419 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 1024419 ']' 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@951 -- # kill -0 1024419 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@952 -- # 
uname 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1024419 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1024419' 00:28:50.597 killing process with pid 1024419 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@966 -- # kill 1024419 00:28:50.597 Received shutdown signal, test time was about 1.000000 seconds 00:28:50.597 00:28:50.597 Latency(us) 00:28:50.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.597 =================================================================================================================== 00:28:50.597 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:50.597 00:44:16 keyring_file -- common/autotest_common.sh@971 -- # wait 1024419 00:28:50.854 00:44:16 keyring_file -- keyring/file.sh@21 -- # killprocess 1022948 00:28:50.854 00:44:16 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 1022948 ']' 00:28:50.854 00:44:16 keyring_file -- common/autotest_common.sh@951 -- # kill -0 1022948 00:28:50.854 00:44:16 keyring_file -- common/autotest_common.sh@952 -- # uname 00:28:50.854 00:44:16 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:50.854 00:44:16 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1022948 00:28:50.854 00:44:17 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:50.854 00:44:17 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:50.854 00:44:17 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1022948' 00:28:50.854 killing process with pid 1022948 00:28:50.854 00:44:17 keyring_file -- common/autotest_common.sh@966 -- # kill 1022948 00:28:50.854 [2024-05-15 00:44:17.014164] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:50.854 [2024-05-15 00:44:17.014250] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:50.854 00:44:17 keyring_file -- common/autotest_common.sh@971 -- # wait 1022948 00:28:51.420 00:28:51.420 real 0m14.689s 00:28:51.420 user 0m35.359s 00:28:51.420 sys 0m3.293s 00:28:51.420 00:44:17 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:51.420 00:44:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.420 ************************************ 00:28:51.420 END TEST keyring_file 00:28:51.420 ************************************ 00:28:51.420 00:44:17 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:28:51.420 00:44:17 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:51.420 
00:44:17 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:51.420 00:44:17 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:28:51.420 00:44:17 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:51.420 00:44:17 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:51.420 00:44:17 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:51.420 00:44:17 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:28:51.420 00:44:17 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:28:51.420 00:44:17 -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:51.420 00:44:17 -- common/autotest_common.sh@10 -- # set +x 00:28:51.420 00:44:17 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:28:51.420 00:44:17 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:28:51.420 00:44:17 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:28:51.420 00:44:17 -- common/autotest_common.sh@10 -- # set +x 00:28:53.324 INFO: APP EXITING 00:28:53.324 INFO: killing all VMs 00:28:53.324 INFO: killing vhost app 00:28:53.324 INFO: EXIT DONE 00:28:54.695 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:28:54.695 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:54.695 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:54.695 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:54.695 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:54.695 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:54.695 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:54.695 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:54.695 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:54.695 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:54.695 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:54.695 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:54.695 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:54.695 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:54.695 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:54.695 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:54.695 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:56.068 Cleaning 00:28:56.068 Removing: /var/run/dpdk/spdk0/config 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:56.068 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:56.068 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:56.068 Removing: /var/run/dpdk/spdk1/config 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:56.068 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:56.068 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:56.068 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:56.068 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:56.068 Removing: /var/run/dpdk/spdk2/config 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:56.068 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:56.068 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:56.068 Removing: /var/run/dpdk/spdk3/config 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:56.068 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:56.068 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:56.068 Removing: /var/run/dpdk/spdk4/config 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:56.068 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:56.068 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:56.068 Removing: /dev/shm/bdev_svc_trace.1 00:28:56.068 Removing: /dev/shm/nvmf_trace.0 00:28:56.068 Removing: /dev/shm/spdk_tgt_trace.pid745532 00:28:56.068 Removing: /var/run/dpdk/spdk0 00:28:56.068 Removing: /var/run/dpdk/spdk1 00:28:56.068 Removing: /var/run/dpdk/spdk2 00:28:56.068 Removing: /var/run/dpdk/spdk3 00:28:56.068 Removing: /var/run/dpdk/spdk4 00:28:56.068 Removing: /var/run/dpdk/spdk_pid1001821 00:28:56.068 Removing: /var/run/dpdk/spdk_pid1001829 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1005333 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1006636 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1008129 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1008904 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1010355 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1011189 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1017189 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1017583 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1017976 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1019511 
00:28:56.327 Removing: /var/run/dpdk/spdk_pid1019910 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1020310 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1022948 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1023073 00:28:56.327 Removing: /var/run/dpdk/spdk_pid1024419 00:28:56.327 Removing: /var/run/dpdk/spdk_pid743859 00:28:56.327 Removing: /var/run/dpdk/spdk_pid744601 00:28:56.327 Removing: /var/run/dpdk/spdk_pid745532 00:28:56.327 Removing: /var/run/dpdk/spdk_pid745973 00:28:56.327 Removing: /var/run/dpdk/spdk_pid746660 00:28:56.327 Removing: /var/run/dpdk/spdk_pid746815 00:28:56.327 Removing: /var/run/dpdk/spdk_pid747529 00:28:56.327 Removing: /var/run/dpdk/spdk_pid747667 00:28:56.327 Removing: /var/run/dpdk/spdk_pid747911 00:28:56.327 Removing: /var/run/dpdk/spdk_pid749188 00:28:56.327 Removing: /var/run/dpdk/spdk_pid750145 00:28:56.327 Removing: /var/run/dpdk/spdk_pid750444 00:28:56.327 Removing: /var/run/dpdk/spdk_pid750641 00:28:56.327 Removing: /var/run/dpdk/spdk_pid750858 00:28:56.327 Removing: /var/run/dpdk/spdk_pid751073 00:28:56.327 Removing: /var/run/dpdk/spdk_pid751321 00:28:56.327 Removing: /var/run/dpdk/spdk_pid751481 00:28:56.327 Removing: /var/run/dpdk/spdk_pid751666 00:28:56.327 Removing: /var/run/dpdk/spdk_pid752249 00:28:56.327 Removing: /var/run/dpdk/spdk_pid754596 00:28:56.327 Removing: /var/run/dpdk/spdk_pid754768 00:28:56.327 Removing: /var/run/dpdk/spdk_pid755047 00:28:56.327 Removing: /var/run/dpdk/spdk_pid755066 00:28:56.327 Removing: /var/run/dpdk/spdk_pid755490 00:28:56.327 Removing: /var/run/dpdk/spdk_pid755503 00:28:56.327 Removing: /var/run/dpdk/spdk_pid755924 00:28:56.327 Removing: /var/run/dpdk/spdk_pid755970 00:28:56.327 Removing: /var/run/dpdk/spdk_pid756270 00:28:56.327 Removing: /var/run/dpdk/spdk_pid756320 00:28:56.327 Removing: /var/run/dpdk/spdk_pid756536 00:28:56.327 Removing: /var/run/dpdk/spdk_pid756638 00:28:56.327 Removing: /var/run/dpdk/spdk_pid757009 00:28:56.327 Removing: /var/run/dpdk/spdk_pid757179 00:28:56.327 Removing: /var/run/dpdk/spdk_pid757905 00:28:56.327 Removing: /var/run/dpdk/spdk_pid758162 00:28:56.327 Removing: /var/run/dpdk/spdk_pid758183 00:28:56.327 Removing: /var/run/dpdk/spdk_pid758375 00:28:56.327 Removing: /var/run/dpdk/spdk_pid758532 00:28:56.327 Removing: /var/run/dpdk/spdk_pid758757 00:28:56.327 Removing: /var/run/dpdk/spdk_pid758967 00:28:56.327 Removing: /var/run/dpdk/spdk_pid759120 00:28:56.327 Removing: /var/run/dpdk/spdk_pid759396 00:28:56.327 Removing: /var/run/dpdk/spdk_pid759556 00:28:56.327 Removing: /var/run/dpdk/spdk_pid759713 00:28:56.327 Removing: /var/run/dpdk/spdk_pid759989 00:28:56.327 Removing: /var/run/dpdk/spdk_pid760151 00:28:56.327 Removing: /var/run/dpdk/spdk_pid760304 00:28:56.327 Removing: /var/run/dpdk/spdk_pid760584 00:28:56.327 Removing: /var/run/dpdk/spdk_pid760739 00:28:56.327 Removing: /var/run/dpdk/spdk_pid760898 00:28:56.327 Removing: /var/run/dpdk/spdk_pid761174 00:28:56.327 Removing: /var/run/dpdk/spdk_pid761333 00:28:56.327 Removing: /var/run/dpdk/spdk_pid761514 00:28:56.327 Removing: /var/run/dpdk/spdk_pid761770 00:28:56.327 Removing: /var/run/dpdk/spdk_pid761933 00:28:56.327 Removing: /var/run/dpdk/spdk_pid762202 00:28:56.327 Removing: /var/run/dpdk/spdk_pid762371 00:28:56.327 Removing: /var/run/dpdk/spdk_pid762550 00:28:56.327 Removing: /var/run/dpdk/spdk_pid762764 00:28:56.327 Removing: /var/run/dpdk/spdk_pid765235 00:28:56.327 Removing: /var/run/dpdk/spdk_pid794669 00:28:56.327 Removing: /var/run/dpdk/spdk_pid797690 00:28:56.327 Removing: /var/run/dpdk/spdk_pid805477 00:28:56.327 
Removing: /var/run/dpdk/spdk_pid809059 00:28:56.327 Removing: /var/run/dpdk/spdk_pid811966 00:28:56.327 Removing: /var/run/dpdk/spdk_pid812369 00:28:56.327 Removing: /var/run/dpdk/spdk_pid820454 00:28:56.327 Removing: /var/run/dpdk/spdk_pid820458 00:28:56.327 Removing: /var/run/dpdk/spdk_pid820998 00:28:56.327 Removing: /var/run/dpdk/spdk_pid821655 00:28:56.327 Removing: /var/run/dpdk/spdk_pid822315 00:28:56.327 Removing: /var/run/dpdk/spdk_pid822710 00:28:56.327 Removing: /var/run/dpdk/spdk_pid822720 00:28:56.327 Removing: /var/run/dpdk/spdk_pid822862 00:28:56.327 Removing: /var/run/dpdk/spdk_pid822998 00:28:56.327 Removing: /var/run/dpdk/spdk_pid823004 00:28:56.327 Removing: /var/run/dpdk/spdk_pid823654 00:28:56.327 Removing: /var/run/dpdk/spdk_pid824306 00:28:56.327 Removing: /var/run/dpdk/spdk_pid824857 00:28:56.327 Removing: /var/run/dpdk/spdk_pid825257 00:28:56.327 Removing: /var/run/dpdk/spdk_pid825332 00:28:56.327 Removing: /var/run/dpdk/spdk_pid825516 00:28:56.327 Removing: /var/run/dpdk/spdk_pid826540 00:28:56.327 Removing: /var/run/dpdk/spdk_pid827265 00:28:56.327 Removing: /var/run/dpdk/spdk_pid833574 00:28:56.327 Removing: /var/run/dpdk/spdk_pid833804 00:28:56.327 Removing: /var/run/dpdk/spdk_pid836858 00:28:56.327 Removing: /var/run/dpdk/spdk_pid840973 00:28:56.327 Removing: /var/run/dpdk/spdk_pid843158 00:28:56.327 Removing: /var/run/dpdk/spdk_pid850399 00:28:56.327 Removing: /var/run/dpdk/spdk_pid856303 00:28:56.327 Removing: /var/run/dpdk/spdk_pid857498 00:28:56.327 Removing: /var/run/dpdk/spdk_pid858161 00:28:56.327 Removing: /var/run/dpdk/spdk_pid870095 00:28:56.327 Removing: /var/run/dpdk/spdk_pid872729 00:28:56.327 Removing: /var/run/dpdk/spdk_pid896979 00:28:56.327 Removing: /var/run/dpdk/spdk_pid900185 00:28:56.585 Removing: /var/run/dpdk/spdk_pid901366 00:28:56.585 Removing: /var/run/dpdk/spdk_pid902681 00:28:56.585 Removing: /var/run/dpdk/spdk_pid902770 00:28:56.585 Removing: /var/run/dpdk/spdk_pid902962 00:28:56.585 Removing: /var/run/dpdk/spdk_pid903104 00:28:56.585 Removing: /var/run/dpdk/spdk_pid903553 00:28:56.585 Removing: /var/run/dpdk/spdk_pid904867 00:28:56.585 Removing: /var/run/dpdk/spdk_pid905609 00:28:56.585 Removing: /var/run/dpdk/spdk_pid906034 00:28:56.585 Removing: /var/run/dpdk/spdk_pid907651 00:28:56.585 Removing: /var/run/dpdk/spdk_pid908185 00:28:56.585 Removing: /var/run/dpdk/spdk_pid908778 00:28:56.585 Removing: /var/run/dpdk/spdk_pid911582 00:28:56.585 Removing: /var/run/dpdk/spdk_pid918072 00:28:56.585 Removing: /var/run/dpdk/spdk_pid920836 00:28:56.585 Removing: /var/run/dpdk/spdk_pid925007 00:28:56.585 Removing: /var/run/dpdk/spdk_pid926594 00:28:56.585 Removing: /var/run/dpdk/spdk_pid927691 00:28:56.585 Removing: /var/run/dpdk/spdk_pid930534 00:28:56.585 Removing: /var/run/dpdk/spdk_pid933317 00:28:56.585 Removing: /var/run/dpdk/spdk_pid938232 00:28:56.585 Removing: /var/run/dpdk/spdk_pid938239 00:28:56.585 Removing: /var/run/dpdk/spdk_pid941457 00:28:56.585 Removing: /var/run/dpdk/spdk_pid941683 00:28:56.585 Removing: /var/run/dpdk/spdk_pid941818 00:28:56.585 Removing: /var/run/dpdk/spdk_pid942090 00:28:56.585 Removing: /var/run/dpdk/spdk_pid942101 00:28:56.585 Removing: /var/run/dpdk/spdk_pid945011 00:28:56.585 Removing: /var/run/dpdk/spdk_pid945434 00:28:56.585 Removing: /var/run/dpdk/spdk_pid948416 00:28:56.585 Removing: /var/run/dpdk/spdk_pid950403 00:28:56.586 Removing: /var/run/dpdk/spdk_pid954242 00:28:56.586 Removing: /var/run/dpdk/spdk_pid957848 00:28:56.586 Removing: /var/run/dpdk/spdk_pid964613 00:28:56.586 Removing: 
/var/run/dpdk/spdk_pid969964 00:28:56.586 Removing: /var/run/dpdk/spdk_pid969966 00:28:56.586 Removing: /var/run/dpdk/spdk_pid982978 00:28:56.586 Removing: /var/run/dpdk/spdk_pid983405 00:28:56.586 Removing: /var/run/dpdk/spdk_pid983938 00:28:56.586 Removing: /var/run/dpdk/spdk_pid984465 00:28:56.586 Removing: /var/run/dpdk/spdk_pid985185 00:28:56.586 Removing: /var/run/dpdk/spdk_pid985718 00:28:56.586 Removing: /var/run/dpdk/spdk_pid986258 00:28:56.586 Removing: /var/run/dpdk/spdk_pid986796 00:28:56.586 Removing: /var/run/dpdk/spdk_pid989579 00:28:56.586 Removing: /var/run/dpdk/spdk_pid989745 00:28:56.586 Removing: /var/run/dpdk/spdk_pid993927 00:28:56.586 Removing: /var/run/dpdk/spdk_pid994114 00:28:56.586 Removing: /var/run/dpdk/spdk_pid995722 00:28:56.586 Clean 00:28:56.586 00:44:22 -- common/autotest_common.sh@1448 -- # return 0 00:28:56.586 00:44:22 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:28:56.586 00:44:22 -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:56.586 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:28:56.586 00:44:22 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:28:56.586 00:44:22 -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:56.586 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:28:56.586 00:44:22 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:56.586 00:44:22 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:56.586 00:44:22 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:56.586 00:44:22 -- spdk/autotest.sh@387 -- # hash lcov 00:28:56.586 00:44:22 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:56.586 00:44:22 -- spdk/autotest.sh@389 -- # hostname 00:28:56.586 00:44:22 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:56.842 geninfo: WARNING: invalid characters removed from testname! 
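
The coverage capture above and the merge/filter passes that follow condense to the pipeline below. This is a sketch with shortened placeholder paths (SPDK_DIR, OUT) and with the genhtml rc flags omitted for brevity; the lcov invocations are otherwise the ones shown in the log.

SPDK_DIR=/path/to/spdk                              # placeholder for the workspace spdk checkout
OUT="$SPDK_DIR/../output"                           # placeholder for the autotest output directory
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# Capture coverage after the run, then merge it with the pre-test baseline.
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Drop third-party and system code from the combined report, one pattern per pass.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
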
00:29:28.909 00:44:49 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:28.909 00:44:53 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:30.837 00:44:56 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:34.127 00:44:59 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:36.656 00:45:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:39.210 00:45:05 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:42.510 00:45:08 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:42.510 00:45:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.510 00:45:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:42.510 00:45:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.510 00:45:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.510 00:45:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.510 00:45:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.510 00:45:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.510 00:45:08 -- paths/export.sh@5 -- $ export PATH 00:29:42.510 00:45:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.510 00:45:08 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:42.510 00:45:08 -- common/autobuild_common.sh@437 -- $ date +%s 00:29:42.510 00:45:08 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715726708.XXXXXX 00:29:42.510 00:45:08 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715726708.0YALFI 00:29:42.510 00:45:08 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:29:42.510 00:45:08 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:29:42.510 00:45:08 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:42.510 00:45:08 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:42.510 00:45:08 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:42.510 00:45:08 -- common/autobuild_common.sh@453 -- $ get_config_params 00:29:42.510 00:45:08 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:42.510 00:45:08 -- common/autotest_common.sh@10 -- $ set +x 00:29:42.510 00:45:08 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:42.510 00:45:08 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:29:42.510 00:45:08 -- pm/common@17 -- $ local monitor 00:29:42.510 00:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:42.510 00:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:42.510 00:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:42.510 00:45:08 -- pm/common@21 -- $ date +%s 00:29:42.510 00:45:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:42.510 00:45:08 -- pm/common@21 -- $ date +%s 00:29:42.510 
00:45:08 -- pm/common@25 -- $ sleep 1 00:29:42.510 00:45:08 -- pm/common@21 -- $ date +%s 00:29:42.510 00:45:08 -- pm/common@21 -- $ date +%s 00:29:42.510 00:45:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726708 00:29:42.510 00:45:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726708 00:29:42.510 00:45:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726708 00:29:42.510 00:45:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726708 00:29:42.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726708_collect-vmstat.pm.log 00:29:42.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726708_collect-cpu-load.pm.log 00:29:42.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726708_collect-cpu-temp.pm.log 00:29:42.510 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726708_collect-bmc-pm.bmc.pm.log 00:29:43.446 00:45:09 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:43.446 00:45:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:29:43.446 00:45:09 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:43.446 00:45:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:43.446 00:45:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:43.446 00:45:09 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:43.446 00:45:09 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:43.446 00:45:09 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:43.446 00:45:09 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:43.446 00:45:09 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:43.446 00:45:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:43.446 00:45:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:43.446 00:45:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:43.446 00:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:43.446 00:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:43.446 00:45:09 -- pm/common@44 -- $ pid=1034750 00:29:43.446 00:45:09 -- pm/common@50 -- $ kill -TERM 1034750 00:29:43.446 00:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:43.446 00:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:43.446 00:45:09 -- pm/common@44 -- $ pid=1034752 00:29:43.446 00:45:09 -- pm/common@50 -- $ kill 
-TERM 1034752 00:29:43.446 00:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:43.446 00:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:43.446 00:45:09 -- pm/common@44 -- $ pid=1034754 00:29:43.446 00:45:09 -- pm/common@50 -- $ kill -TERM 1034754 00:29:43.446 00:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:43.446 00:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:43.446 00:45:09 -- pm/common@44 -- $ pid=1034789 00:29:43.446 00:45:09 -- pm/common@50 -- $ sudo -E kill -TERM 1034789 00:29:43.446 + [[ -n 657910 ]] 00:29:43.446 + sudo kill 657910 00:29:43.454 [Pipeline] } 00:29:43.471 [Pipeline] // stage 00:29:43.476 [Pipeline] } 00:29:43.492 [Pipeline] // timeout 00:29:43.497 [Pipeline] } 00:29:43.512 [Pipeline] // catchError 00:29:43.517 [Pipeline] } 00:29:43.533 [Pipeline] // wrap 00:29:43.538 [Pipeline] } 00:29:43.553 [Pipeline] // catchError 00:29:43.561 [Pipeline] stage 00:29:43.562 [Pipeline] { (Epilogue) 00:29:43.577 [Pipeline] catchError 00:29:43.578 [Pipeline] { 00:29:43.590 [Pipeline] echo 00:29:43.591 Cleanup processes 00:29:43.596 [Pipeline] sh 00:29:43.876 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:43.876 1034901 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:43.876 1035123 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:43.888 [Pipeline] sh 00:29:44.167 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:44.167 ++ grep -v 'sudo pgrep' 00:29:44.167 ++ awk '{print $1}' 00:29:44.167 + sudo kill -9 1034901 00:29:44.179 [Pipeline] sh 00:29:44.461 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:52.632 [Pipeline] sh 00:29:52.918 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:52.919 Artifacts sizes are good 00:29:52.935 [Pipeline] archiveArtifacts 00:29:52.943 Archiving artifacts 00:29:53.124 [Pipeline] sh 00:29:53.407 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:53.420 [Pipeline] cleanWs 00:29:53.430 [WS-CLEANUP] Deleting project workspace... 00:29:53.430 [WS-CLEANUP] Deferred wipeout is used... 00:29:53.436 [WS-CLEANUP] done 00:29:53.438 [Pipeline] } 00:29:53.461 [Pipeline] // catchError 00:29:53.473 [Pipeline] sh 00:29:53.758 + logger -p user.info -t JENKINS-CI 00:29:53.765 [Pipeline] } 00:29:53.781 [Pipeline] // stage 00:29:53.786 [Pipeline] } 00:29:53.804 [Pipeline] // node 00:29:53.809 [Pipeline] End of Pipeline 00:29:53.841 Finished: SUCCESS